The Problem
If you use Claude Code (or any AI coding assistant) seriously, you've hit these:
Your AI agrees with everything you say. You challenge it, it immediately surrenders. No pushback, no analysis: just "you're right, sorry."
Your AI asks instead of doing. "Want me to fix that?" Just fix it. You told it to.
Your AI forgets things after context compression. Important context evaporates. Old memories pollute new decisions. There's no cleanup.
Your AI keeps making the same mistakes. You correct it, it says "I'll do better," then does the exact same thing tomorrow.
These aren't bugs. They're structural problems with how AI assistants work. And prompts in CLAUDE.md don't fix them; they get ignored after the first compact.
The Fix: Code, Not Prompts
We built three tools that solve these problems with actual enforcement: Python scripts you drop in and forget.
1. Self-Guard (PreResponse Hook)
A hook that intercepts your AI's response before it reaches you and detects bad behavior patterns:
- Mode F: Ask instead of do – catches "want me to...?" and "shall I...?", then injects a correction
- Mode G: Acknowledge without action – the user says "change X to Y", the AI says "got it" but never touches a file
- Mode E: Sycophancy – the user challenges, the AI immediately surrenders without evidence
- Passive deferral – "I'll do that tomorrow" with no concrete plan
It's a PreResponse hook, meaning it runs before the response is finalized. The AI gets a system warning and has to fix its own output. Physical enforcement, not wishful thinking.
Fully configurable via JSON: add your own patterns, disable modes you don't care about. Works in English and Chinese out of the box.
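The core idea can be sketched in a few lines. This is a minimal illustration of JSON-configured pattern matching, not the tool's actual schema: the mode names come from the post, but the config fields (`enabled`, `patterns`, `warning`) and function names are assumptions.

```python
import json
import re
from typing import List

# Hypothetical config shape -- the real self-guard JSON may differ.
CONFIG = json.loads("""
{
  "modes": {
    "F": {"enabled": true,
          "patterns": ["want me to", "shall i"],
          "warning": "Mode F: do the work instead of asking."},
    "E": {"enabled": true,
          "patterns": ["you're right, sorry"],
          "warning": "Mode E: don't surrender without evidence."}
  }
}
""")

def check_response(text: str) -> List[str]:
    """Return a warning for every enabled mode whose pattern matches."""
    warnings = []
    for mode, cfg in CONFIG["modes"].items():
        if not cfg.get("enabled", True):
            continue
        if any(re.search(p, text, re.IGNORECASE) for p in cfg["patterns"]):
            warnings.append(cfg["warning"])
    return warnings

# A PreResponse hook would feed warnings back as a system message
# and require the AI to revise its output before it is finalized.
print(check_response("Want me to fix that for you?"))
```

The point is that the check runs as code in the hook pipeline, so it fires every time, regardless of what has been compacted out of context.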
2. Memory GC (Lifecycle Manager)
Your AI's memories should have an expiry date. This tool manages the full lifecycle:
- Add memories with type, importance, and automatic TTL
- Garbage collect expired memories on schedule
- Deduplicate similar memories (with CJK-aware similarity matching)
- Promote frequently-accessed memories from temporary to permanent
- Validate for contradictions and injection patterns
Default TTLs: context (14 days), preferences (30 days), progress (7 days), insights (21 days). Importance multipliers extend or shorten TTL automatically. Access count extends lifetime.
No vector DB needed. No pip dependencies. One Python file.
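The lifecycle math above can be sketched roughly as follows. The default TTLs are from the post; the exact multiplier formula and access bonus are illustrative assumptions, and the function names are hypothetical:

```python
from datetime import datetime, timedelta

# Default TTLs per memory type, in days (from the post).
DEFAULT_TTL_DAYS = {"context": 14, "preferences": 30, "progress": 7, "insights": 21}

def expiry(mem_type, importance=1.0, access_count=0, created=None):
    """Compute an expiry timestamp: base TTL scaled by importance,
    plus an assumed bonus of one day per access, capped at 30."""
    created = created or datetime.now()
    base = DEFAULT_TTL_DAYS[mem_type]
    bonus = min(access_count, 30)  # assumed cap, not from the post
    return created + timedelta(days=base * importance + bonus)

def gc(memories, now=None):
    """Drop expired memories; memories promoted to permanent survive."""
    now = now or datetime.now()
    return [m for m in memories if m.get("permanent") or m["expires"] > now]
```

For example, a `progress` memory created on Jan 1 with default importance expires on Jan 8, while a `context` memory at importance 2.0 lives 28 days instead of 14.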
3. Pitfall Tracker (Learn From Mistakes)
Track AI mistakes systematically instead of hoping "I'll try harder" works. The magic is in the escalation logic:
- 3 occurrences -> tagged: this isn't a one-off
- 5 occurrences -> tagged: suggested for CLAUDE.md hard rules
- 8+ occurrences -> the AI can't fix this alone
Run it periodically to auto-generate improvement items from recurring patterns, track their resolution, and measure your AI's actual improvement rate.
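The escalation thresholds above can be sketched like this. The 3/5/8+ cutoffs are from the post; the action labels and function names are illustrative, since the original tag names aren't given:

```python
from collections import Counter

def escalation(count):
    """Map an occurrence count to an action (thresholds from the post;
    labels are assumptions)."""
    if count >= 8:
        return "escalate: the AI can't fix this alone"
    if count >= 5:
        return "suggest a CLAUDE.md hard rule"
    if count >= 3:
        return "tag as recurring"
    return None  # below threshold: probably a one-off

def report(pitfalls):
    """pitfalls: iterable of mistake identifiers (one entry per occurrence).
    Returns only the pitfalls that have crossed a threshold."""
    counts = Counter(pitfalls)
    return {p: escalation(n) for p, n in counts.items() if escalation(n)}
```

Counting occurrences in code, rather than relying on the AI's own recollection, is what makes the "same mistake, week after week" pattern visible at all.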
Quick Start
Self-Guard:
Memory GC:
Pitfall Tracker:
Background
These tools were built while running a team of 7+ AI agents 24/7 across multiple terminals. Every tool exists because we hit the problem repeatedly and prompts didn't fix it.
- Self-guard came from an AI that said "want me to...?" hundreds of times despite being told not to
- Memory GC came from context pollution after months of accumulated memories with no cleanup
- Pitfall tracker came from the same mistakes appearing in nightly reflections week after week
If your AI keeps agreeing with you, keeps forgetting things, or keeps making the same mistakes, these tools are for you.
Zero dependencies. Python 3.8+. MIT licensed.
GitHub: permafrost-tools