
Field Notes

Dispatches on interpretability, topology, and the craft of AI auditing.

Apr 27, 2026

When Models Agree, Start Worrying: The Consensus Trap in AI Auditing

Why unanimous AI consensus signals decision fragility, and how disagreement patterns reveal the boundaries of reliable model performance.

cross-model validation · AI auditing · model risk management

Apr 24, 2026

The Immutable AI Decision Record: Why Financial AI Needs Cryptographic Audit Trails

Traditional AI audit logs fail when decisions can be retroactively altered, making cryptographic immutability essential for financial AI governance.

ai-auditing · regulatory-compliance · decision-provenance

Apr 23, 2026

The Open Weight Paradox: Why Enterprise AI Control Comes at the Cost of Model Quality

Open-weight models offer enterprises unprecedented interpretability and control, but the technical tradeoffs reveal deeper questions about what regulated industries actually need from AI systems.

open-weight-models · enterprise-ai · model-auditing

Apr 22, 2026

The Silent Shift: How LLMs Change Their Minds Without Changing Their Scores

Why tracking accuracy misses the most dangerous form of AI model drift: when LLMs maintain performance while fundamentally altering their decision-making patterns.

model-drift · llm-monitoring · ai-auditing

Apr 21, 2026

The Interaction Depth Problem: Why Feature Crosses Hide the Most Dangerous AI Bias

Complex feature interactions create discrimination patterns that standard fairness metrics miss entirely, requiring new approaches to uncover proxy discrimination.

ai-bias-detection · algorithmic-fairness · model-auditing

Apr 20, 2026

The Perturbation Paradox: Why Poking Models Reveals More Than Prompting Them

Systematic input manipulation exposes the real drivers of AI decisions better than any explanation the model provides about itself.

perturbation-testing · interpretability · ai-auditing

Apr 17, 2026

The Prompt Injection Blind Spot: Why Banking AI Risk Frameworks Miss the Real Threat

Banks are applying traditional model risk frameworks to LLMs while ignoring the fundamental vulnerability that makes credit decision AI uniquely exploitable.

banking-ai-risk · model-validation · prompt-injection

Apr 16, 2026

The Actuarial AI Gap: Why Insurance Models Break Where Banking Models Bend

Insurance AI faces unique regulatory challenges that make standard banking compliance frameworks dangerously inadequate.

insurance-ai · governance · claims-processing

Apr 15, 2026

Domain Drift and Circuit Specialization: What Credit Risk Models Reveal About LLM Architecture

Circuit tracing in financial LLMs reveals how domain-specific training creates specialized neural pathways that standard interpretability methods completely miss.

circuit-tracing · enterprise-ai · model-risk

Apr 14, 2026

Self-Reporting vs. Independent Verification: The False Promise of Explainable AI

Why AI systems explaining their own decisions creates a fundamental verification gap that standard explainability tools cannot close.

decision-transparency · explainable-ai · audit-methodology

Apr 10, 2026

The Multi-Model Audit Problem: Why Your AI Governance Stack Is Already Obsolete

Enterprise AI deployments across multiple vendors create audit blind spots that single-model governance frameworks cannot address.

model-agnostic · governance · enterprise-ai

Apr 8, 2026

When Statistical Tests Miss the Point: How Topology Exposes Hidden AI Decision Patterns

Topological data analysis reveals AI decision patterns that traditional statistical methods systematically overlook, exposing critical model behaviors in production systems.

topological-data-analysis · model-behavior-analysis · AI-auditing

Apr 7, 2026

Mechanistic Interpretability's Production Problem: Why Cross-Layer Analysis Won't Scale

The most promising interpretability techniques break down when applied to production AI systems at enterprise scale.

mechanistic-interpretability · production-deployment · ai-auditing

Apr 6, 2026

The EU AI Act's Documentation Trap: Why Banks Are Building Compliance Theater

The EU AI Act's documentation requirements are pushing banks toward performative compliance that misses the actual risks in their AI systems.

EU AI Act · financial services · AI auditing

Mar 31, 2026

The LLM Compliance Gap: Why Model Risk Management Is Fighting the Last War

Traditional compliance frameworks fail catastrophically when applied to LLM-based decisions, requiring fundamentally different audit trail architectures.

AI audit · compliance · model risk management

Mar 27, 2026

Why Circuit Tracing Changes the Audit Conversation

Traditional model evaluations tell you what a model gets wrong. Circuit tracing tells you why. That distinction matters more than most teams realize.

mechanistic interpretability · circuit tracing · AI auditing