Built 2026-04-17 07:54 · commit f2be6c7
Affaan Mustafa Claude Code Longform Guide
Summary
This source extends the shorthand setup guide into an advanced operating playbook for Claude Code: session logs, strategic compaction, dynamic system-prompt injection, hook-based memory persistence, eval loops, token-aware delegation, bounded parallelization, and reusable patterns that transfer across agent tools.
Source
- Raw file: raw/anthropic/claude-code/everything-claude-code/The Longform Guide to Everything Claude Code.md
- Translated raw file: raw/anthropic/claude-code/everything-claude-code/The Longform Guide to Everything Claude Code.zh.md
- Original URL: https://x.com/affaanmustafa/status/2014040193557471352
- Author: Affaan Mustafa
- Published: 2026-01-17
- Ingest date: 2026-04-13
Key Contributions
- Treats session state as a durable artifact through `.tmp` logs, `/learn`, and hook-driven persistence instead of relying on one endless thread.
- Distinguishes strategic compaction, context resets, and system-prompt injection as different tools for controlling what authority and memory each session carries.
- Makes token optimization operational through model-tiering, subagent benchmarking, CLI-over-MCP substitutions, and explicit cost-awareness.
- Connects verification to staged eval design: cheap graders, transcript reading, pass-rate metrics, and human review all have distinct roles.
- Frames parallelization as a bounded technique with explicit kickoff patterns, worktree isolation, and low-overlap task splits.
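The worktree-isolation idea in the last point can be sketched with plain `git worktree`; the repository, branch names, and paths below are illustrative examples, not taken from the guide:

```shell
set -e

# Throwaway demo repo standing in for a real project (hypothetical).
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
git -c user.email=agent@example.com -c user.name=agent \
    commit -q --allow-empty -m "init"

# One worktree per parallel agent task, each on its own branch,
# so concurrent sessions never edit the same checkout.
git worktree add ../task-auth -b agent/auth
git worktree add ../task-docs -b agent/docs
git worktree list
```

Each worktree then hosts its own Claude Code session; with low-overlap task splits, the later merges stay cheap, and `git worktree remove` cleans up when a task lands.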
Strongest Claims
- Long-running agent work stays coherent when memory, plans, and learnings are externalized into files rather than left implicit in chat history.
- Verification should be designed as a loop with explicit graders and escalation paths, not as a final self-confidence check.
- Reusable patterns compound across model generations more reliably than tool-specific tricks.
- Lazy-loaded MCPs reduce startup context pressure, but CLI wrappers and skills can still win on token cost.
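As a rough illustration of the externalized-memory claim (a sketch, not the guide's exact configuration): a `Stop` hook in `.claude/settings.json` can append each session's transcript path to a durable index file. This assumes Claude Code's documented hooks schema and the `transcript_path` field on the hook's stdin payload; the log path and use of `jq` are hypothetical choices.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "jq -r .transcript_path >> ~/.claude/session-index.log"
          }
        ]
      }
    ]
  }
}
```

Later sessions (or a `/learn`-style command) can re-read those indexed transcripts, so learnings persist in files rather than in one long-lived chat thread.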