AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman

Published 10 Mar 2026

Duration: 43:20

AI is transforming quality engineering: testing non-deterministic systems, evolving QA roles, and demanding new tools, cross-team collaboration, and human-AI synergy to deliver compliance, risk management, and efficiency in an increasingly complex software landscape.

Episode Description

How do you ensure software quality when the system you're testing doesn't give the same output twice? That's the core challenge facing every QA team b...

Overview

The podcast explores the transformative impact of AI on software development and quality engineering, emphasizing both opportunities and challenges. AI tools are lowering barriers to entry by enabling non-experts to build applications, accelerating development cycles, and increasing code complexity. This shift has raised the stakes for testing, as non-deterministic AI systems, such as chatbots and agentic tools, produce variable outputs requiring new testing methodologies. Quality professionals face heightened risks, particularly in critical sectors such as manufacturing and healthcare, where AI-driven failures could have severe consequences. The podcast highlights the growing need for testers to evolve their strategies, embracing AI as a supplementary tool rather than a replacement, while advocating for testing to be repositioned as a strategic business function rather than a cost center.

Key challenges include adapting to non-deterministic systems, which demand risk-based testing approaches rather than complete coverage, and addressing gaps in traditional testing frameworks. The discussion underscores the importance of decomposing applications into deterministic and non-deterministic components for targeted testing, leveraging specialized AI tools like SureWire for statistical evaluation of AI behavior. Cross-disciplinary collaboration is deemed essential, integrating risk management, data science, and domain expertise to address AI-specific challenges. The evolution of testing now also involves AI-assisted refactoring of legacy systems, compliance with regulatory standards, and ensuring alignment between requirements, code, and user expectations.
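The statistical-evaluation idea above can be sketched in code. The following is a minimal, hypothetical Python sketch (it is not from the episode and does not represent the SureWire tool): a stand-in non-deterministic component is sampled many times, and the check asserts that the expected output appears at a minimum rate rather than on every single run.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so this sketch is reproducible

def flaky_classifier(text: str) -> str:
    """Hypothetical stand-in for a non-deterministic AI component:
    usually labels clearly positive text 'positive', but not always."""
    if "great" in text:
        return "positive" if random.random() < 0.9 else "neutral"
    return "negative"

def statistical_check(component, prompt, expected, runs=200, min_rate=0.8):
    """Risk-based statistical check: instead of asserting one exact
    output, sample the component repeatedly and require the expected
    label to appear at least `min_rate` of the time."""
    counts = Counter(component(prompt) for _ in range(runs))
    rate = counts[expected] / runs
    return rate >= min_rate, rate

ok, rate = statistical_check(flaky_classifier, "great product", "positive")
print(ok, round(rate, 2))
```

The design choice mirrors the episode's point: for the deterministic parts of an application, exact assertions still apply; for the non-deterministic parts, tolerance thresholds and sample sizes become explicit, reviewable test parameters.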

The podcast emphasizes that while AI can boost productivity through automation and reduce manual work, its integration requires careful risk management, human oversight, and tailored strategies. Testers must adapt by upskilling in AI literacy, data analysis, and hybrid workflows, while organizations should adopt AI incrementally, addressing immediate pain points before tackling complex challenges. In the envisioned future of AI-driven development cycles, quality assurance shifts from a purely technical role to one aligned with business outcomes, safety, and compliance, with testers acting as guardians of quality in an increasingly AI-integrated landscape.

Recent Episodes of Test Guild

25 Mar 2026 - AI Testing: How Solo Testers Stay Confident in Releases with Christine Pinto

Solo QA testers face isolation, imposter syndrome, and difficulty spotting edge cases and accessibility issues, and AI-generated code further complicates quality assurance. Tools like Whizzo and Rizzo, community collaboration, and a balance of AI automation with human oversight and ethical considerations offer ways to improve testing efficiency and product reliability.
