Field Notes
Dispatches on interpretability, topology, and the craft of AI auditing.
Apr 16, 2026
The Actuarial AI Gap: Why Insurance Models Break Where Banking Models Bend
Insurance AI faces unique regulatory challenges that make standard banking compliance frameworks dangerously inadequate.
Apr 15, 2026
Domain Drift and Circuit Specialization: What Credit Risk Models Reveal About LLM Architecture
Circuit tracing in financial LLMs reveals how domain-specific training creates specialized neural pathways that standard interpretability methods completely miss.
Apr 14, 2026
Self-Reporting vs. Independent Verification: The False Promise of Explainable AI
Why AI systems explaining their own decisions creates a fundamental verification gap that standard explainability tools cannot close.
Apr 10, 2026
The Multi-Model Audit Problem: Why Your AI Governance Stack Is Already Obsolete
Enterprise AI deployments across multiple vendors create audit blind spots that single-model governance frameworks cannot address.
Apr 8, 2026
When Statistical Tests Miss the Point: How Topology Exposes Hidden AI Decision Patterns
Topological data analysis reveals AI decision patterns that traditional statistical methods systematically overlook, exposing critical model behaviors in production systems.
Apr 7, 2026
Mechanistic Interpretability's Production Problem: Why Cross-Layer Analysis Won't Scale
The most promising interpretability techniques break down when applied to production AI systems at enterprise scale.
Apr 6, 2026
The EU AI Act's Documentation Trap: Why Banks Are Building Compliance Theater
The EU AI Act's documentation requirements are pushing banks toward performative compliance that misses the actual risks in their AI systems.
Mar 31, 2026
The LLM Compliance Gap: Why Model Risk Management Is Fighting the Last War
Traditional compliance frameworks fail catastrophically when applied to LLM-based decision systems, which require fundamentally different audit trail architectures.
Mar 27, 2026
Why Circuit Tracing Changes the Audit Conversation
Traditional model evaluations tell you what a model gets wrong. Circuit tracing tells you why. That distinction matters more than most teams realize.