Built 26/04/16 13:13 · commit cc1e88d
How Claude Code Builds a System Prompt
Summary
This source is Drew Breunig's analysis of how Claude Code assembles its system prompt from many conditional components. Its main value for this vault is that it explains the assembly logic behind a modern coding-agent prompt stack, making it a strong companion to prompt-inventory sources like Piebald-AI/claude-code-system-prompts.
Source
- Raw file: raw/dbreunig/how-claude-code-builds-a-system-prompt.md
- Translated raw file: raw/dbreunig/how-claude-code-builds-a-system-prompt.zh.md
- Original URL: https://www.dbreunig.com/2026/04/04/how-claude-code-builds-a-system-prompt.html
- Ingest date: 2026-04-16
Key Contributions
- Explains that Claude Code's system prompt is a dynamic assembly process rather than one static hidden string.
- Maps a wide range of conditional prompt components, including tool policy, coding norms, communication style, subagent guidance, skills behavior, memory logic, MCP instructions, scratchpad rules, git context, and configurable suffixes.
- Clarifies that the final agent context includes more than the system prompt alone: tool definitions, user content, conversation history, attachments, and skills are all part of the operating surface.
- Gives a more architectural view than prompt-dump repositories by focusing on how sections are selected, omitted, or varied depending on runtime conditions.
- Strengthens the broader argument that harness behavior is shaped by context engineering and orchestration logic as much as by the base model.
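The assembly process described above can be sketched as a list of (condition, section) pairs evaluated against a runtime context. This is a minimal illustration, not Claude Code's actual implementation; all field names, section texts, and the `RuntimeContext` structure are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical runtime context; the fields are illustrative stand-ins
# for the conditions the article describes (git state, MCP, skills, suffixes).
@dataclass
class RuntimeContext:
    in_git_repo: bool = False
    mcp_servers: list[str] = field(default_factory=list)
    skills_enabled: bool = False
    user_suffix: str = ""

# Each component is a (predicate, text) pair; the predicate decides
# whether that section appears in this session's system prompt.
Component = tuple[Callable[[RuntimeContext], bool], str]

COMPONENTS: list[Component] = [
    (lambda c: True, "Tool policy: prefer built-in tools over shell commands."),
    (lambda c: True, "Communication style: be concise and direct."),
    (lambda c: c.in_git_repo, "Git context: the working directory is a git repository."),
    (lambda c: bool(c.mcp_servers), "MCP instructions: external MCP servers are available."),
    (lambda c: c.skills_enabled, "Skills: load skill files on demand, not eagerly."),
]

def assemble_system_prompt(ctx: RuntimeContext) -> str:
    """Select, omit, or vary sections based on runtime conditions."""
    parts = [text for cond, text in COMPONENTS if cond(ctx)]
    if ctx.user_suffix:  # configurable suffix appended last
        parts.append(ctx.user_suffix)
    return "\n\n".join(parts)

# Two sessions with different runtime conditions yield different prompts.
plain = assemble_system_prompt(RuntimeContext())
rich = assemble_system_prompt(
    RuntimeContext(in_git_repo=True, mcp_servers=["docs"])
)
assert "Git context" not in plain and "Git context" in rich
```

The point the sketch makes concrete: there is no single canonical system prompt string to dump, only a function from runtime state to prompt text.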
Practical Implications
- Prompt inventories and prompt-assembly analyses should be read together: one shows the pieces, the other shows how the pieces become a live runtime context.
- Agent behavior depends heavily on conditional harness logic, not only on whatever final system prompt string happens to be visible in one session.
- Skills, MCP instructions, memory systems, tool definitions, and git/runtime metadata all belong in the same analysis frame when reasoning about coding-agent behavior.
- Reverse-engineering or leaked-source analyses can expose operational truths that product docs flatten or abstract away.