More How To Test This? episodes

How to Test With Agentic AI automation Tool - Geosley Andrades

Published 29 Mar 2026

Duration: 00:50:33

Agentic AI automation is reshaping QA by enabling autonomous testing, addressing skill gaps and scalability through adaptive, no-code solutions, and urging professionals to upskill in AI/ML, prioritize business logic, and balance automation with human oversight for reliable, secure, and context-aware quality assurance.

Episode Description

Teams don't fail at AI adoption because of the technology, but because of how they evaluate and adopt it. Geosley Andrades, Product Evangelist & Commun...

Overview

The podcast explores the impact of agentic automation AI on quality assurance (QA) teams, emphasizing its role in transforming traditional testing practices. Agentic AI leverages large language models (LLMs) to automate tasks like test case generation, synthetic data creation, and code writing, reducing reliance on deep coding expertise. Key features include autonomous discovery, where AI analyzes application interfaces to identify issues, and self-healing capabilities that adapt to evolving applications, minimizing maintenance burdens. However, challenges persist in the testing industry, such as skill gaps in coding and AI, maintenance complexities with conventional frameworks, and scalability issues in manual processes. To address these, experts recommend starting with small AI implementations, collaborating with domain experts, and investing in training to bridge knowledge gaps.
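The "self-healing" behavior described above can be illustrated with a minimal sketch: when a primary locator breaks after a UI change, the framework falls back to alternate attributes instead of failing outright, which is what reduces the maintenance burden the episode discusses. All names here (`Element`, `Page`, `find_with_healing`) are hypothetical stand-ins, not the API of any tool mentioned in the episode.

```python
# Minimal self-healing locator sketch: try a priority-ordered list of
# (attribute, value) locators and report whether a fallback was needed.
from dataclasses import dataclass, field


@dataclass
class Element:
    """A stand-in for a DOM node, described by its attributes."""
    attrs: dict


@dataclass
class Page:
    """A stand-in for a rendered page: a flat list of elements."""
    elements: list = field(default_factory=list)

    def query(self, key: str, value: str):
        # Return the first element whose attribute matches, else None.
        for el in self.elements:
            if el.attrs.get(key) == value:
                return el
        return None


def find_with_healing(page: Page, locators: list):
    """Try each (attribute, value) locator in priority order.

    Returns (element, healed), where healed is True if the primary
    locator failed and a lower-priority fallback succeeded.
    """
    for i, (key, value) in enumerate(locators):
        el = page.query(key, value)
        if el is not None:
            return el, i > 0
    return None, False


# Usage: the button's id changed from "submit" to "submit-v2" in a UI
# update, but its visible text is unchanged, so the fallback "heals"
# the lookup without a script edit.
page = Page([Element({"id": "submit-v2", "text": "Submit order"})])
el, healed = find_with_healing(
    page, [("id", "submit"), ("text", "Submit order")]
)
```

A real agentic platform would also persist the repaired locator (or re-rank locator candidates with a model) so subsequent runs use the healed strategy directly; this sketch only shows the fallback-and-report core of the idea.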

The discussion highlights the need for strategic AI adoption, focusing on tools that support end-to-end testing across web, APIs, and enterprise systems (e.g., Salesforce, SAP) while prioritizing design principles and contextual intelligence to avoid hallucinations. Misconceptions about AI include over-reliance on automation without proper validation or context, which can lead to flawed outcomes. Effective use requires balancing AI-generated outputs with human oversight, ensuring alignment with business logic and testing rigor. Enterprise applications face unique challenges, such as frequent UI updates disrupting automation scripts, which agentic platforms like ExcelQ address through live asset updates and pre-built accelerators. The conversation also underscores the importance of metrics like bug quality, automation efficiency, and ROI evaluation to guide AI tool selection and implementation.

Finally, the podcast stresses the evolving role of QA professionals in an AI-driven era. While agentic systems accelerate testing, human expertise remains critical for prompt engineering, validating AI outputs, and understanding design principles. Career advice emphasizes continuous learning, adaptability, and mastering AI-specific concepts (e.g., RAG pipelines, vector databases). Testers are encouraged to embrace AI as a tool to enhance efficiency rather than replace their roles, focusing on contextual awareness and fast-paced testing to align with modern development cycles. The shift from traditional methods to AI-integrated testing demands a balance between automation and human judgment, ensuring robust, scalable QA frameworks.

Recent Episodes of How To Test This?

27 Mar 2026 How to Test a Release - Oleksandr Bolzhelarskyi

Strategies for effective software testing emphasize separating QA from quality management, addressing role confusion and oversight gaps, utilizing process improvements and tools, balancing speed with stability through rigorous regression testing, fostering collaboration between teams, and leveraging automation and continuous improvement to ensure reliable releases.

19 Mar 2026 How to Test with Independent QA | Guest: Tudor Brad

The evolving role of QA in software development emphasizes independent teams for unbiased testing, addresses challenges like post-launch failures and AI tool adaptation, integrates proactive security and ethics, and highlights future trends in AI-driven QA and ethical compliance.

17 Mar 2026 How to Test This with AI and MCP - Deepak Kamboj

AI integration in test automation streamlines processes via agents that generate test cases, analyze failures, and execute accessibility and performance checks with tools like Playwright, leveraging frameworks such as MCP and TypeScript/Python workflows while addressing challenges like context awareness and flaky tests, and advancing toward autonomous, scalable AI-driven testing strategies.

7 Mar 2026 How to Test with HIST - Ruslan Desyatnikov

The podcast discusses a transformative approach to Quality Assurance that emphasizes proactive risk mitigation, business alignment, and elevating the QA profession through critical thinking and AI adaptation.