Built 26/04/17 06:04 · commit 511b6d9
Architecture
This page summarizes the current llm-wiki operating model as a local-first knowledge system with an automated maintenance loop and a static publishing pipeline.
Summary
- Capture starts locally: Chrome with the Obsidian Web Clipper converts web pages into markdown and stores them under `raw/`.
- The repository follows the Karpathy-style LLM knowledge base pattern: `raw/` is the source layer and `wiki/` is the maintained synthesis layer.
- Codex agents act as the maintenance engine: they ingest new sources, update durable wiki pages, run lint passes, preserve bilingual siblings, and commit the resulting changes.
- GitHub is the canonical remote. After local commits, pushes propagate the updated vault to the shared source of truth.
- VitePress renders the repository into a readable site, and Vercel deploys it automatically from GitHub so the wiki is available from desktop and mobile browsers.
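The lint pass that preserves bilingual siblings can be illustrated with a small check. This is a minimal sketch, not the project's actual tooling; in particular, the `.zh.md` suffix for Chinese siblings is an assumed naming convention chosen for the example:

```typescript
// Given a flat list of wiki file paths, report English pages that are
// missing a Chinese sibling. The `.zh.md` suffix is an assumption for
// illustration; adapt it to the vault's real layout.
function missingSiblings(paths: string[]): string[] {
  const have = new Set(paths);
  return paths.filter(
    (p) =>
      p.endsWith(".md") &&
      !p.endsWith(".zh.md") &&
      !have.has(p.replace(/\.md$/, ".zh.md"))
  );
}
```

For example, `missingSiblings(["wiki/topics/a.md", "wiki/topics/a.zh.md", "wiki/topics/b.md"])` returns `["wiki/topics/b.md"]`, which a maintenance session could then translate before committing.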
Flow
- Web content or notes enter the vault through `raw/`.
- Codex maintenance sessions integrate the new material into `wiki/sources/`, `wiki/topics/`, `wiki/concepts/`, and `wiki/answers/`.
- The repository history remains intentional and inspectable through Git.
- GitHub push triggers Vercel deployment of the VitePress site.
- Readers access the same knowledge base through a public URL in any browser.
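The publishing step in this flow needs only a thin VitePress layer on top of the repository. Below is a hypothetical sketch of a `.vitepress/config.ts`; the title, excluded globs, and sidebar entries are illustrative assumptions, not the project's actual configuration:

```typescript
import { defineConfig } from "vitepress";

// Illustrative config: on each GitHub push, Vercel runs the VitePress build
// and serves the generated static site. Paths and labels are assumptions.
export default defineConfig({
  title: "llm-wiki",
  srcExclude: ["raw/**"], // keep the raw source layer out of the rendered site
  themeConfig: {
    sidebar: [
      { text: "Topics", link: "/wiki/topics/" },
      { text: "Concepts", link: "/wiki/concepts/" },
      { text: "Answers", link: "/wiki/answers/" },
    ],
  },
});
```

With a config like this, "zero-touch publishing" reduces to `git push`: Vercel's Git integration detects the commit, rebuilds, and redeploys without any manual step.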
Design Goal
The core idea is to keep the knowledge base editable as plain markdown locally, while making maintenance increasingly automatic and publishing essentially zero-touch after each push.