HeadlinesBriefing.com

CC-Canary: Open-Source Tool Detects AI Model Drift in Claude Code Sessions

Hacker News

Developers now have a new open-source tool to monitor AI model drift in Claude Code sessions. CC-Canary, developed by Delta-HQ, analyzes existing JSONL logs stored locally on users' machines to detect early signs of regression in AI behavior. The tool operates entirely offline, requiring no network connectivity or cloud accounts, and generates detailed forensic reports in markdown or HTML formats.
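Because the analysis is offline, the core of such a tool reduces to reading local JSONL files line by line. The sketch below shows a minimal, tolerant parser for that format; the event schema (field names like `type` and `tool`) is illustrative, not CC-Canary's actual log structure.

```python
import json
from typing import Dict, List

def load_session_events(jsonl_text: str) -> List[Dict]:
    """Parse a JSONL session log: one JSON object per line.
    Malformed lines are skipped so a partially written log
    does not abort the whole analysis."""
    events = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate truncated writes at the end of a log
    return events

# Hypothetical log lines with an illustrative schema:
log = "\n".join([
    '{"type": "tool_call", "tool": "Read", "tokens": 120}',
    '{"type": "tool_call", "tool": "Edit", "tokens": 80}',
    "not-json-garbage",
    '{"type": "assistant", "tokens": 300}',
])
events = load_session_events(log)
print(len(events))  # → 3 (the garbled line is dropped)
```

Skipping bad lines rather than raising keeps the analysis robust against logs from interrupted sessions.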

The system tracks critical metrics including read:edit ratios, cost per session, and reasoning loop frequency across specified time windows (7d–180d). By comparing model outputs before and after potential inflection points, it identifies deviations in token usage, tool call patterns, and thinking depth. Reports include color-coded verdicts (HOLDING/SUSPECTED/CONFIRMED REGRESSION) and visual trend charts showing metric shifts. Installation is a single command, `npx skills add delta-hq/cc-canary`; the only runtime requirement is Python 3.8+, and analysis takes roughly 20 seconds per session.
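The before/after comparison can be illustrated with a toy implementation: compute a metric such as the read:edit ratio per window, then map the shift between baseline and current values onto the report's verdicts. The event fields and the 25%/50% thresholds below are assumptions for illustration, not CC-Canary's actual values.

```python
from collections import Counter
from typing import Dict, List

def read_edit_ratio(events: List[Dict]) -> float:
    """Ratio of Read to Edit tool calls; a rising value can mean the
    model is re-reading files instead of making edits (assumed schema)."""
    counts = Counter(e.get("tool") for e in events if e.get("type") == "tool_call")
    edits = counts.get("Edit", 0)
    return counts.get("Read", 0) / edits if edits else float("inf")

def verdict(baseline: float, current: float,
            suspect_at: float = 1.25, confirm_at: float = 1.50) -> str:
    """Map a metric shift onto the report's color-coded verdicts.
    Thresholds are illustrative."""
    shift = current / baseline if baseline else float("inf")
    if shift >= confirm_at:
        return "CONFIRMED REGRESSION"
    if shift >= suspect_at:
        return "SUSPECTED"
    return "HOLDING"

# Example: 6 reads vs 2 edits gives a ratio of 3.0 for this window.
window = ([{"type": "tool_call", "tool": "Read"}] * 6
          + [{"type": "tool_call", "tool": "Edit"}] * 2)
print(read_edit_ratio(window))   # → 3.0
print(verdict(2.0, 2.2))         # → HOLDING (10% shift)
print(verdict(2.0, 3.2))         # → CONFIRMED REGRESSION (60% shift)
```

Comparing ratios rather than raw counts keeps the verdict stable across sessions of very different lengths.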

Privacy is prioritized through local processing: no data leaves users' devices. The tool excludes subagent sessions by default and truncates sensitive user prompt details. Reports contain 15+ technical appendices analyzing aspects like premature stops and word-frequency shifts. The name references the historical coal mine canaries that detected toxic gases, symbolizing the tool's role as an early warning for AI behavior anomalies.
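Prompt truncation of the kind described above can be sketched in a few lines: keep only a short prefix of each user prompt so a report retains context without reproducing sensitive content. The cutoff length and marker here are arbitrary choices, not the tool's documented behavior.

```python
def truncate_prompt(text: str, keep: int = 40) -> str:
    """Return at most `keep` characters of a user prompt, marking the
    cut so report readers know detail was deliberately removed."""
    if len(text) <= keep:
        return text
    return text[:keep].rstrip() + "…[truncated]"

print(truncate_prompt("fix the failing test"))  # short prompts pass through
print(truncate_prompt("x" * 200))               # long prompts are clipped with a marker
```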

Currently in pre-alpha (0.x), the project welcomes contributions via GitHub. Its MIT-licensed codebase focuses on practical engineering rather than theoretical research, offering developers actionable insights into Claude Code's evolving performance. The tool's ability to detect drift without altering existing workflows makes it particularly valuable for teams maintaining long-term AI projects.