More How To Test This? episodes


How To Test With ContextQA - Deep Barot

Published 23 Apr 2026

Duration: 00:48:27

The evolution of software testing reflects the shift from manual to AI-driven automated QA. Context-aware tools address siloed teams, outdated methods, and inefficiencies by integrating cross-functional expertise, delivering ROI improvements of up to 20x, and balancing automation with human judgment in a collaborative, adaptive QA future.

Episode Description

Deep Barot, founder and CEO of ContextQA, an IBM partner and G2 recognized platform, built a company solving a problem he experienced as a DevOps engi...

Overview

The podcast discusses the evolution of software testing through the lens of Deep Barot, a software engineer who transitioned into QA after identifying industry pain points such as siloed teams, manual processes, and inadequate automation. He founded ContextQA, a company focused on context-aware AI solutions to address these challenges, emphasizing the need for accessible automation tools that reduce repetitive tasks and empower teams to focus on judgment-driven testing. Key themes include the shortcomings of traditional QA methods, such as reliance on fragile tools like Selenium, and the importance of integrating contextual factors (product, design, development, and DevOps perspectives) into testing frameworks to ensure comprehensive coverage. The approach aims to align QA practices with business outcomes, streamline CI/CD pipelines, and address resource constraints like limited documentation and prioritization of automation.

A central focus is the role of AI in transforming QA, advocating for a hybrid model in which AI handles 99% of repetitive tasks while humans manage edge cases and provide strategic oversight. The podcast highlights ContextQA's framework, which uses AI to automate test case generation, organize tests by priority, and integrate with tools like Jira for end-to-end lifecycle management. This approach is shown to deliver significant ROI (12x to 20x) by reducing release cycles and improving defect tracking. Challenges include overcoming team resistance to AI, ensuring secure and privacy-compliant use of AI models, and avoiding misconceptions such as treating AI as a standalone solution. The discussion also underscores the shift toward shared QA responsibility across teams and the need for QA professionals to adapt by learning AI-specific skills while retaining problem-solving and collaboration expertise.

The podcast explores broader industry shifts, including the move from headcount-based QA consulting to outcome-driven models and the importance of aligning AI adoption with business goals. It advises QA professionals to embrace AI as a tool to enhance, rather than replace, human expertise, emphasizing adaptability, context-aware testing, and practical skills like understanding AI limitations and integrating tools into existing workflows. Additionally, it addresses the evolving role of testers in 2026, encouraging a product-owner mindset, AI proficiency, and a focus on solving business problems rather than just technical tasks. The conversation concludes with practical strategies for tool selection, team collaboration, and fostering a culture of continuous learning in QA roles.

Recent Episodes of How To Test This?

8 Apr 2026 How To Test As Agentic Quality Engineer - Dragan Spiridonov

The evolution of quality engineering integrates AI through Agentic Quality Engineering (AQE), emphasizing structured frameworks like PACT, human oversight, and the balance between AI efficiency and foundational QA skills while addressing challenges like alignment with business goals and ethical considerations.

29 Mar 2026 How to Test With Agentic AI Automation Tool - Geosley Andrades

Agentic AI automation is reshaping QA by enabling autonomous testing, addressing skill gaps and scalability through adaptive, no-code solutions, and urging professionals to upskill in AI/ML, prioritize business logic, and balance automation with human oversight for reliable, secure, and context-aware quality assurance.

27 Mar 2026 How to Test a Release - Oleksandr Bolzhelarskyi

Strategies for effective software testing emphasize separating QA from quality management, addressing role confusion and oversight gaps, utilizing process improvements and tools, balancing speed with stability through rigorous regression testing, fostering collaboration between teams, and leveraging automation and continuous improvement to ensure reliable releases.

19 Mar 2026 How to Test with Independent QA | Guest: Tudor Brad

The evolving role of QA in software development emphasizes independent teams for unbiased testing, addresses challenges like post-launch failures and AI tool adaptation, integrates proactive security and ethics, and highlights future trends in AI-driven QA and ethical compliance.

17 Mar 2026 How to Test This with AI and MCP - Deepak Kamboj

AI integration in test automation streamlines processes through agents that generate test cases, analyze failures, and execute accessibility and performance checks with tools like Playwright. Leveraging frameworks like MCP and TypeScript/Python workflows, it addresses challenges such as context awareness and flaky tests while advancing toward autonomous, scalable AI-driven testing strategies.