# Why
The rationale behind every architectural decision. Each entry traces back to one of the seven Strategic Goals or to the Core Value (single, stable, machine-readable, citable reference).
If you're tempted to change something on this list, read the rationale first. Most have a load-bearing reason, even when the reason isn't obvious.
## Decisions traceable to "single source of truth"
### Zod is the only schema
schemas/source.ts defines every front-matter contract. The CMS config, the JSON Schema for CI, and the runtime validator all derive from it via pnpm codegen. CI fails on drift.
Why not Sanity, Contentful, or another CMS-with-its-own-schema? Vendor lock-in. The methodology is supposed to outlast any tool. Plain Markdown + Zod survives any CMS swap (Sveltia → Decap → Tina → hand-edit, all use the same config.yml shape).
### Slugs are immutable
Once a concept is published at /concepts/company-strategy/purpose, that URL is forever. Renames require a redirect or a tombstone (410 page).
Why not allow renames freely? Citations break. The whole point of this site is that academic papers and AI agents can pin a URL and trust it. The Phase 6 URL-stability CI test seeds from scripts/manifest.json and checks every historical URL on every PR.
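The stability rule itself is simple enough to sketch. This is a hypothetical reduction of the Phase 6 test, not the real script: every seeded URL must either still resolve (200) or be a deliberate tombstone (410); a probe that follows redirects would already report 200 for a properly redirected rename.

```typescript
// The real test would await a fetch per URL against the deployment;
// the probe is synchronous here so the rule is testable offline.
type Probe = (url: string) => number;

export function brokenHistoricalUrls(urls: string[], probe: Probe): string[] {
  return urls.filter((url) => {
    const status = probe(url);
    // 200 = still published, 410 = deliberate tombstone.
    // Anything else (404, 500) breaks somebody's citation — fail the PR.
    return status !== 200 && status !== 410;
  });
}
```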
## Decisions traceable to "machine-readable"
### /aom.json exists as a static file, not a runtime API
plugins/generate-aom-json/index.ts walks docs/** at loadContent and writes build/aom.json at postBuild. No server, no caching layer, no rate limit.
Why not a JSON API on a Worker? The data changes only when content changes (rare, monthly at most). Static files served via Cloudflare's CDN are 10× faster than any cold-start Worker, free at any scale, and impossible to DoS. Trading dynamism for resilience is the right call when the underlying data is itself static.
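The payload shape below is an assumption for illustration; only the loadContent/postBuild lifecycle names come from the Docusaurus plugin API. The point is that the plugin reduces to a pure function over the docs tree, which is what makes the output byte-stable.

```typescript
// Hypothetical shape of a doc entry collected at loadContent.
interface DocEntry {
  slug: string;
  title: string;
  hat: string;
}

// Pure builder: what plugins/generate-aom-json/index.ts could write to
// build/aom.json at postBuild. No server, no cache — just a file.
export function buildAomIndex(docs: DocEntry[], specVersion: string) {
  return {
    specVersion,
    concepts: docs.map((d) => ({
      id: d.slug,
      title: d.title,
      hat: d.hat,
      url: `/concepts/${d.hat}/${d.slug}`,
    })),
  };
}
```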
### Content negotiation via Cloudflare Pages Functions, not a separate /api namespace
functions/_middleware.ts rewrites Accept: text/markdown requests to the .md sibling. The canonical URL stays the same.
Why not /api/concepts/<slug> style endpoints? API surface explodes. Citation URLs would diverge from "the page" URLs. Content negotiation lets the same URL serve HTML to humans and Markdown to AI agents — that's the whole point of HTTP Accept headers.
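The middleware's decision logic can be factored into two pure helpers, sketched here under an assumed path-mapping rule ("/x/" or "/x" maps to "/x.md"); the real functions/_middleware.ts may differ in details.

```typescript
// Does the client prefer Markdown? (AI agents send Accept: text/markdown.)
export function wantsMarkdown(accept: string | null): boolean {
  return accept !== null && accept.includes("text/markdown");
}

// Map a canonical page path to its pre-built .md sibling.
export function markdownSibling(pathname: string): string {
  const clean = pathname.endsWith("/") ? pathname.slice(0, -1) : pathname;
  return `${clean || "/index"}.md`;
}
```

Inside the real middleware, the handler would call these on the incoming request and fall through to the next handler for ordinary HTML requests, so the canonical URL never changes.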
### .md mirrors are pre-built, not server-rendered
The @signalwire/docusaurus-plugin-llms-txt plugin emits one .md file per doc URL at build time. 105 mirrors total.
Why not Pandoc on the fly? Build-time mirrors are bytewise stable across requests, citation-stable, and incur zero runtime cost. Plus they cache perfectly at the edge.
## Decisions traceable to "citable"
### CITATION.cff at the repo root, not just in the docs
GitHub renders a "Cite this repository" widget when CITATION.cff lives at the root. Educators see it without scrolling.
### Per-page "Cite this page" button copies APA/MLA/Chicago/BibTeX
src/components/CiteThisPage.tsx generates each format client-side from the page metadata + customFields.specVersion.
Why client-side? Avoids a server, works for any concept page without per-page setup, and the citations stay in sync with the page automatically (URL, title, version) without an out-of-band data file to maintain.
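To make the "no out-of-band data file" point concrete, here is a hedged sketch of one of the four formats. The metadata field names and the exact BibTeX layout are assumptions, not CiteThisPage.tsx's real output.

```typescript
// Hypothetical page metadata available client-side
// (title/URL from the route, specVersion from customFields).
interface PageMeta {
  title: string;
  url: string;
  specVersion: string;
  year: number;
}

export function toBibtex(m: PageMeta): string {
  // Derive a citation key from the title: "Purpose" -> "purpose".
  const key = m.title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return [
    `@misc{${key},`,
    `  title = {${m.title}},`,
    `  howpublished = {\\url{${m.url}}},`,
    `  note = {Anatomy of Marketing, spec v${m.specVersion}},`,
    `  year = {${m.year}}`,
    `}`,
  ].join("\n");
}
```

Because every field is read from the live page, a title or version bump is reflected in the next copy-to-clipboard with no separate citations file to update.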
### schema.org/DefinedTerm per concept + DefinedTermSet per hat
plugins/inject-defined-term/index.ts injects JSON-LD via JSDOM at postBuild. Google + Bing + ChatGPT browse all read this.
Why postBuild + JSDOM, not injectHtmlTags? injectHtmlTags is site-wide. routesBuildMetadata only exposes noIndex. Per-page payloads need actual HTML rewriting; JSDOM is the canonical pattern (verified against @stackql/...-structured-data's source).
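The payload itself is simpler than the injection machinery. A sketch of the per-concept JSON-LD object, using real schema.org/DefinedTerm property names; which fields the real plugin emits is an assumption.

```typescript
// Build the JSON-LD object for one concept page.
export function definedTermJsonLd(opts: {
  name: string;
  description: string;
  url: string;
  termSetUrl: string; // the hat's DefinedTermSet URL
}) {
  return {
    "@context": "https://schema.org",
    "@type": "DefinedTerm",
    name: opts.name,
    description: opts.description,
    url: opts.url,
    inDefinedTermSet: opts.termSetUrl,
  };
}
```

At postBuild, the plugin would serialize this into a script tag of type application/ld+json and splice it into each built HTML file via JSDOM.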
## Decisions traceable to "open"
### Content under CC BY 4.0, code under MIT, trademark stays with Ne-Lo
Three separate licenses because they protect three separate things. Founders can fork the code freely (MIT) and use the methodology freely (CC BY) but can't brand a derivative course as "Anatomy of Marketing" (trademark).
### No signup walls on core spec content
Strategic Goal: "every concept page, every .md mirror, /llms.txt, /aom.json, MCP queries — all reachable without registration." Asserted by an automated CI test in Phase 6.
Why? The whole AI-agent strategy depends on URLs being reachable. Putting concept pages behind a signup wall would mean ChatGPT browse can't read them, which kills the differentiator.
### Lead capture only on richer-format bonus downloads
Slide decks, expandable hat-map images, courseware bundles, workshop kits live in static/downloads/gated/ and have a registration form. Concept pages don't.
Why split? Some founders need the lead-capture data for "used by [companies]" social proof. The split keeps that working without polluting the open-spec content with auth gates.
### Cloudflare Pages free tier, not Vercel Pro / Mintlify hosted
CLAUDE.md is explicit: "no SaaS lock-in." Cloudflare's free tier covers our needs and the workflow is just static-files-pushed-to-an-edge-CDN — portable to any other host with wrangler pages deploy swapped for the equivalent.
## Decisions traceable to "founder-editable"
### Sveltia (or Decap fallback) at /admin, not a custom CMS
Both Sveltia and Decap consume the same config.yml we generate from Zod. Either works. The CMS swap is one CDN URL change in src/pages/admin.tsx.
Why a third-party CMS? Building a custom CMS is months of work and the GitHub-OAuth + create-PR-on-save behaviour is exactly what we need. Sveltia is the modern fork of Decap with better Markdown previewing.
### Editorial workflow → PR → merge, never direct push to main
CMS-05. Saves create branches; reviewers merge.
Why? A founder making a typo can't break the live site. The PR queue is also a natural review surface (Ne-Lo can comment, ask follow-ups, batch-merge a week's edits).
### Slug field marked immutable in the CMS
CMS-04 + CONT-12. Editors literally can't change a URL.
Why? See "Slugs are immutable" above. The CMS-side guarantee is the first line of defense; the build-time onBrokenLinks: 'throw' is the second.
## Decisions traceable to "AI-agent-native"
### Sibling MCP repo, not embedded in the wiki repo
the-kizz/aom-mcp is its own pnpm package with its own GitHub repo.
Why? MCP servers have a different lifecycle (install locally, not deploy). Different licence concerns (the AoM content is CC BY 4.0; the MCP server code is MIT). Different release cadence. Co-locating would couple the two without benefit.
### MCP read-only HTTPS-fetch, not stdio + filesystem
The MCP server fetches /aom.json over HTTPS. It doesn't exec, doesn't spawn, doesn't read local files outside its own dir.
Why? It structurally eliminates the CVE-2026-30623 class of stdio command-injection vulnerabilities. A user installing the MCP into Claude Desktop is trusting Ne-Lo not to ship malicious tool code; the architecture removes the whole class of "tool passes user-controlled strings to a shell" risks.
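The entire read path reduces to one GET plus an in-memory lookup, sketched here with an injected fetcher and a placeholder base URL so it is testable offline; names are illustrative, not the real server's API.

```typescript
// Minimal assumed shape of the published index.
interface AomConcept { id: string; title: string; url: string }
interface AomIndex { concepts: AomConcept[] }

// The server's only outbound capability: fetch /aom.json, then look up.
// No exec, no spawn, no filesystem reads outside its own directory.
export async function lookupConcept(
  baseUrl: string,
  slug: string,
  fetchJson: (url: string) => Promise<AomIndex>,
): Promise<AomConcept | undefined> {
  const index = await fetchJson(`${baseUrl}/aom.json`);
  return index.concepts.find((c) => c.id === slug);
}
```

The real server would pass global fetch as the fetcher; nothing user-controlled ever reaches a shell, because there is no shell to reach.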
### MCP not published to npm
CLAUDE.md is explicit. The repo says "scaffold" — founders publish when they're ready.
Why? Premature publication = premature support obligation. Better to ship as install-from-source until usage signals demand a registry release.
## Decisions traceable to "build-time enforcement"
### Three-layer validation

- CMS field constraints (Sveltia widget — write-time)
- Zod runtime validator in CI (pnpm validate — commit-time)
- Docusaurus front-matter parse hook (build-time)
All three derive from schemas/source.ts. Editor mistakes get caught at the earliest possible layer.
### onBrokenLinks: 'throw' (and Anchors, MarkdownLinks)
Build fails if any internal link or anchor is wrong.
Why? Citations break otherwise. A reader pinning /concepts/.../purpose in their syllabus deserves the URL to keep working — and the only way to guarantee that is to fail loudly on every PR that breaks one.
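The relevant docusaurus.config.ts fragment, for reference. The three option names are real Docusaurus config keys; the rest of the config is elided.

```typescript
// Fail the build — and therefore the PR — on any broken internal
// link, anchor, or Markdown link, rather than warning and shipping.
export default {
  onBrokenLinks: "throw",
  onBrokenAnchors: "throw",
  onBrokenMarkdownLinks: "throw",
};
```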
### axe-core CI on every PR
axe.yml runs Playwright + axe-core over a 4-page sample × 2 colour modes + 360px viewport. Fails on any WCAG 2.1 AA violation.
Why? Educators in many jurisdictions (especially the EU) have accessibility-compliance obligations for the materials in their syllabi. Catching regressions before they ship is cheaper than fixing them post-deploy.
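The gating rule can be expressed as a filter over axe-core results. The tag names below are real axe-core rule tags; the result shape is the subset of axe's output this check needs, and wiring it into the Playwright run in axe.yml is assumed.

```typescript
// Subset of an axe-core violation object relevant to gating.
interface AxeViolation { id: string; tags: string[] }

// axe-core tags covering WCAG 2.1 AA and everything it subsumes.
const AA_TAGS = new Set(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"]);

// Keep only violations that should fail CI; best-practice findings pass.
export function gatingViolations(violations: AxeViolation[]): AxeViolation[] {
  return violations.filter((v) => v.tags.some((t) => AA_TAGS.has(t)));
}
```

In the CI job, the violations array would come from AxeBuilder's analyze() call per sampled page and colour mode; a non-empty return fails the PR.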
## Decisions deliberately deferred
| Item | Why deferred |
|---|---|
| Production cutover to wiki.anatomyofmarketing.org | Requires DNS + research on subdomain choice (vs spec., docs., root); .dev works for v1 |
| Mermaid prerender plugin | The current plugin is 1-star and has no releases — wait for it to mature |
| Concept-graph visualisation (D3 / React Flow / Cytoscape) | Needs inputs[]/outputs[]/related[] front-matter populated by editors — Phase 5+ work |
| DOI registration | Requires institutional partner; v2 backlog |
| Embeddings-based search in MCP | BM25 is fine for 60 concepts; revisit if corpus grows 10× |
Each deferral is a deliberate choice, not an oversight. If you're considering pulling one of these forward, check the relevant Strategic Goal first — most of these don't move that needle yet.