More Hanselminutes with Scott Hanselman episodes

A cognition engine for science with Allen Stewart

Published 12 Mar 2026

Duration: 00:30:44

The episode pairs a look at TXTextControl's platform-independent .NET integration across Windows, Linux, and cloud environments with a discussion of AI-driven memory systems for scientific research. Topics include cognitive engines that balance fast and slow thinking, dynamic memory storage, and knowledge-grounded outputs; reducing token usage; applications in drug discovery and data analysis; and challenges such as token costs and hallucinations, addressed through lab-in-the-loop collaboration and future research on memory-driven problem-solving.

Episode Description

Scott Hanselman sits down with Allen Stewart, Partner Director of Software Engineering at Microsoft, to explore how AI agents with persistent memory a...

Overview

The podcast discusses updates to TXTextControl, emphasizing its platform independence: it supports .NET applications across Windows, Linux, and cloud environments such as Azure App Services. Deployment options include Docker, Kubernetes, and direct Azure integration, and the product integrates with ASP.NET Core and Angular for document editing, collaboration, and PDF processing.

A separate segment explores AI and memory systems, focusing on reusing stored computational results (memory) to improve AI efficiency and accelerate scientific research. Allen Stewart describes how reusing prior computations can save on the order of 150 million tokens in scientific applications such as drug discovery for Alzheimer's disease, and how advanced memory systems attach confidence scores to guide decision-making. The cognitive engine, modeled after human thought processes, balances fast (intuitive) and slow (analytical) reasoning, stores partial memories for future research, and dynamically reroutes around errors or tool failures. Memory systems interact with the AI through relevance scoring, prioritizing high-value data for tasks like scientific analysis while preserving a record of explored territory to avoid redundant work. The engine's resilience and adaptability are demonstrated in scenarios such as autonomous 14-day research investigations involving hundreds of millions of tokens and dynamic memory storage across complex domains.
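The memory mechanism described above can be sketched in a few lines. This is a minimal illustration, not the actual system discussed in the episode: the `Memory` fields, thresholds, and example entries are all hypothetical, chosen only to show how confidence filtering and relevance ranking could combine to pick which prior computations to reuse.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str       # stored result of a prior computation (hypothetical examples)
    confidence: float  # how trustworthy the stored result is (0..1)
    relevance: float   # how closely it matches the current task (0..1)

def select_memories(memories, min_confidence=0.7, top_k=3):
    """Keep only trusted memories, then take the most relevant few.

    Reusing these instead of recomputing is what saves tokens."""
    trusted = [m for m in memories if m.confidence >= min_confidence]
    trusted.sort(key=lambda m: m.relevance, reverse=True)
    return trusted[:top_k]

store = [
    Memory("binding affinity of compound A", confidence=0.9, relevance=0.8),
    Memory("failed docking run (tool error)", confidence=0.3, relevance=0.9),
    Memory("pathway map for inflammation", confidence=0.8, relevance=0.6),
]
picked = select_memories(store)
# The low-confidence failed run is excluded even though it is highly relevant.
```

Filtering on confidence before ranking on relevance is one way to keep flawed memories from influencing outcomes, as the episode discusses.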

The discussion extends to AI's integration in scientific research, combining large language models (LLMs), specialized scientific models, and tools like Clio, a cognitive engine designed to mimic human reasoning. Key challenges include minimizing hallucinations by grounding the AI in testable data, using graph-based knowledge retrieval, and preventing flawed memories from influencing outcomes. The role of memory in scientific workflows is underscored with examples of repurposing partial data and "fog of war" strategies in drug-repurposing studies. The system's ability to evolve through iterative learning, store 1,500 inflammation-related memories, and operate across scientific and office workflows (e.g., OCR-based email generation) is highlighted. Ethical considerations stress that AI remains a tool to augment human expertise rather than replace it, with scientists retaining control over memory selection and validation. Future research directions include refining memory-driven problem-solving in science and extending the cognitive engine to other fields.
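Graph-based knowledge retrieval, mentioned above as a grounding technique, can be illustrated with a toy example. The graph below is invented for illustration (it is not data from the episode): a breadth-first walk collects facts connected to a query entity within a bounded number of hops, so the model's answer can be grounded in explicitly linked knowledge rather than free generation.

```python
from collections import deque

# Toy knowledge graph: each entity points to related entities or facts.
# Entries are illustrative only.
graph = {
    "inflammation": ["TNF-alpha", "IL-6"],
    "TNF-alpha": ["drug: etanercept"],
    "IL-6": ["drug: tocilizumab"],
}

def retrieve(start, max_hops=2):
    """Breadth-first walk collecting facts reachable within max_hops."""
    seen, frontier, facts = {start}, deque([(start, 0)]), []
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                facts.append(nxt)
                frontier.append((nxt, depth + 1))
    return facts

context = retrieve("inflammation")
```

The retrieved facts would then be placed in the model's context, so claims can be traced back to graph edges.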

Recent Episodes of Hanselminutes with Scott Hanselman

19 Mar 2026 Building the Internet with sendmail's Eric Allman

TXTextControl's platform-agnostic .NET applications, with Docker, Kubernetes, and Azure deployment options and ASP.NET Core/Angular integrations for document workflows, alongside explorations of internet history, AI advancements, legacy protocols, and IoT/home automation projects.

5 Feb 2026 The AI Vampire with Gas Town's Steve Yegge

Concerns about AI's impact on software development and work culture are explored, highlighting risks of over-reliance, burnout, and loss of human connection.

More Hanselminutes with Scott Hanselman episodes