Article Review Workflow
Phase 1 – Load & Parse
- Read the article file: content/articles/{id}/{id}.article.mdoc
- If the user provided a review prompt (e.g. "review against clarity" or "check the citations"), note it as the focus lens – it takes priority in the report
- Parse the article structure: count and catalogue all tags used
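The tag catalogue step can be sketched as follows. This is a minimal sketch, not a confirmed parser: the Markdoc-style tag grammar (`{% atom %}`, `{% cite %}`, `{% /atom %}` for closing tags) is assumed from the tag names this workflow mentions.

```python
import re
from collections import Counter

def tag_counts(body: str) -> Counter:
    """Count each opening tag in the article body.

    Closing tags such as {% /atom %} are skipped automatically,
    because '/' is not a word character and so never matches (\\w+).
    Self-closing tags such as {% bibliography /%} still count.
    """
    return Counter(m.group(1) for m in re.finditer(r"\{%\s*(\w+)", body))
```

The resulting counter feeds the parse summary directly (e.g. `counts["atom"]` for the atoms line).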
Present a brief parse summary:
ARTICLE LOADED
──────────────────────────────────────────────
id: {id}
title: {title}
date: {date}
status: {draft | published}
sections: {count}
atoms: {count} ({types})
citations: {count}
footnotes: {count}
quotes: {count}
callouts: {count}
carousels: {count}
assets: {count}
focus lens: {user prompt or "general review"}

Phase 2 – Review Dimensions
Evaluate the article across these dimensions. If the user specified a focus lens, lead with that dimension and go deeper on it.
D1 – Structure & Flow
- Does the article have a clear thesis stated in the tldr and introduction?
- Do sections follow a logical progression?
- Is the heading structure clean (## for sections, minimal ### usage)?
- Does the conclusion land – does it connect back to the thesis?
D2 – Knowledge Extraction (Atoms)
- Are key insights captured as {% atom %} tags?
- Is each atom correctly typed (definition, hypothesis, learning, etc.)?
- Do atoms stand alone – are they readable without surrounding context?
- Are hypothesis → learning arcs present where the article tracks evolving understanding?
- If the source was a transcript: are all definitions, axioms, hypotheses, and predictions captured? This is the most common gap.
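A mechanical first pass on atom typing can be sketched like this. The allowed vocabulary below is an assumption inferred from the types this workflow names (definition, axiom, hypothesis, prediction, learning); the real schema may allow more.

```python
import re

# Assumed vocabulary – inferred from the types mentioned in this
# workflow, not a confirmed schema.
KNOWN_ATOM_TYPES = {"definition", "axiom", "hypothesis", "prediction", "learning"}

def unknown_atom_types(body: str) -> list[str]:
    """Return type attributes on {% atom %} tags that fall outside
    the known vocabulary, so the reviewer can inspect them."""
    found = re.findall(r'\{%\s*atom\s+[^%]*?type="([^"]+)"', body)
    return sorted({t for t in found if t not in KNOWN_ATOM_TYPES})
```

Whether an atom stands alone without surrounding context remains a judgment call; this only catches typing mistakes.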
D3 – Citation Integrity
- Does every external claim have a {% citation %} and a matching {% cite %} marker?
- Are there "weasel phrases" ("studies show", "research indicates") without citations?
- Are citation keys consistent (author-year format)?
- Is {% bibliography /%} present at the end?
- Are there orphaned citations (declared but never cited) or broken cite keys?
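The orphaned/broken check above can be sketched as a set difference. Assumption flagged loudly: both tags are taken to carry a `key="..."` attribute; that attribute name is inferred, not confirmed by the source.

```python
import re

def citation_report(body: str) -> dict[str, list[str]]:
    """Cross-check {% citation %} declarations against {% cite %} markers.

    The key="..." attribute is an assumed schema. [^%]*? keeps each
    match inside a single tag, since '%' cannot appear before '%}'.
    """
    declared = set(re.findall(r'\{%\s*citation\s+[^%]*?key="([^"]+)"', body))
    cited = set(re.findall(r'\{%\s*cite\s+[^%]*?key="([^"]+)"', body))
    return {
        "orphaned": sorted(declared - cited),  # declared but never cited
        "broken": sorted(cited - declared),    # cited but never declared
    }
```

Note that `cite\s+` cannot accidentally match `citation`, because `citation` has no whitespace after `cite`.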
D4 – Tag Usage Quality
- Quotes: Are they attributed? Do they anchor arguments or are they decorative?
- Callouts: Do they highlight genuine implications or are they overused?
- Footnotes: Are they used for asides (correct) or for citations (incorrect – use cite instead)?
- Carousels: Is the content genuinely parallel or would prose be better?
- Assets: Do all src paths resolve? Do images have alt text?
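The asset checks can be sketched as below. The `{% asset %}` tag name and its `src`/`alt` attributes are assumptions about the schema; adjust to the real tag grammar before relying on it.

```python
import re
from pathlib import Path

def asset_issues(body: str, root: Path) -> list[str]:
    """Flag asset tags whose src does not resolve under the article
    root, and image assets missing alt text. Tag and attribute names
    are assumed, not confirmed."""
    issues = []
    for tag in re.finditer(r"\{%\s*asset\s+([^%]*?)/?%\}", body):
        attrs = dict(re.findall(r'(\w+)="([^"]*)"', tag.group(1)))
        src = attrs.get("src")
        if not src:
            issues.append("asset missing src")
        elif not (root / src).exists():
            issues.append(f"unresolved src: {src}")
        if src and src.lower().endswith((".png", ".jpg", ".jpeg", ".gif", ".webp")) \
                and not attrs.get("alt"):
            issues.append(f"image missing alt: {src}")
    return issues
```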
D5 – Writing Quality
- Is the prose clear and direct?
- Are sentences varied in length?
- Is jargon defined (via atom type="definition" or inline)?
- Does the article respect its audience?
D6 – Frontmatter & Conventions
- Required fields present: type, id, title, status, date
- date is ISO 8601
- id is kebab-case
- Tags are relevant
- Article tag date matches frontmatter date
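The D6 checks are mechanical enough to sketch directly. This assumes the frontmatter has already been parsed into a dict; the message wording is illustrative.

```python
import re
from datetime import date

REQUIRED_FIELDS = ("type", "id", "title", "status", "date")
KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def frontmatter_issues(fm: dict) -> list[str]:
    """Check required fields, kebab-case id, and ISO 8601 date."""
    issues = [f"missing field: {k}" for k in REQUIRED_FIELDS if k not in fm]
    if "id" in fm and not KEBAB_CASE.match(fm["id"]):
        issues.append(f"id not kebab-case: {fm['id']}")
    if "date" in fm:
        try:
            date.fromisoformat(fm["date"])  # raises on non-ISO input
        except ValueError:
            issues.append(f"date not ISO 8601: {fm['date']}")
    return issues
```

Comparing the article tag's date against the frontmatter date is then a simple equality check on the two parsed values.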
Phase 3 – Report
Present the review as a structured report:
ARTICLE REVIEW – {title}
──────────────────────────────────────────────
VERDICT: {strong | good | needs work | weak}
{If focus lens was specified:}
FOCUS: {lens}
{Detailed findings for the focus dimension}

DIMENSIONS
D1 Structure & Flow {pass | note | issue} {one-line finding}
D2 Knowledge Extraction {pass | note | issue} {one-line finding}
D3 Citation Integrity {pass | note | issue} {one-line finding}
D4 Tag Usage Quality {pass | note | issue} {one-line finding}
D5 Writing Quality {pass | note | issue} {one-line finding}
D6 Frontmatter {pass | note | issue} {one-line finding}
CRITICAL ISSUES ({count})
1. {issue} → {suggested fix}
2. ...

SUGGESTIONS ({count})
1. {suggestion}
2. ...

MISSING ATOMS
{List any insights in the prose that should be captured as atoms but aren't}

Verdict scale:
- strong – publish-ready, no critical issues
- good – minor issues only, quick fixes
- needs work – structural or citation gaps, worth revising
- weak – fundamental problems (no thesis, no atoms, unsourced claims)
Phase 4 – Fix (optional)
If the user says "fix it", "apply the suggestions", or similar:
- Apply all critical issue fixes
- Apply suggestions where clearly beneficial
- Present the revised draft (do not write to disk)
- Wait for user confirmation before writing
If the user only wants specific fixes ("just fix the citations"), apply only those.
Do's and Don'ts
Do:
- Keep review read-only unless the user explicitly asks for fixes
- Report honestly – a weak article is weak; the user benefits from knowing
- Write the full atom tag when suggesting new atoms (not just "add an atom here")
- Lead with the focus lens dimension if the user specified one
Donβt:
- Write to disk during review unless the user explicitly asks for fixes
- Downgrade an article's status from published to draft without user confirmation
- Inflate verdicts – be honest about quality
- Apply fixes the user did not request (if they say "just fix the citations", only fix citations)
Definition of Done
- Review report delivered with per-dimension verdicts
- All critical issues listed with fix suggestions
- Revised draft presented if user requested fixes