Article Review Workflow

agent: starticle
tags: workflow, article

Phase 1 – Load & Parse

  1. Read the article file: content/articles/{id}/{id}.article.mdoc (see the sketch after this list)
  2. If the user provided a review prompt (e.g. "review against clarity" or "check the citations"), note it as the focus lens – it takes priority in the report
  3. Parse the article structure: count and catalogue all tags used
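
A minimal sketch of the file shape the parser can expect, assuming YAML frontmatter. The tag names come from this workflow's own checklists; the citation key attribute and every concrete value are illustrative assumptions, not confirmed schema:

---
type: article
id: example-article
title: Example Article
status: draft
date: 2025-01-10
---

## First Section

A claim backed by a source {% cite "doe-2024" /%}.

{% atom type="definition" %}
A focus lens is the review dimension the user asked to prioritise.
{% /atom %}

{% citation key="doe-2024" %}
Doe, J. (2024). Placeholder Source.
{% /citation %}

{% bibliography /%}

Counting each tag family (atoms, citations, footnotes, quotes, callouts, carousels, assets) fills the summary below.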

Present a brief parse summary:

ARTICLE LOADED
──────────────────────────────────────────────
id: {id}
title: {title}
date: {date}
status: {draft | published}
sections: {count}
atoms: {count} ({types})
citations: {count}
footnotes: {count}
quotes: {count}
callouts: {count}
carousels: {count}
assets: {count}
focus lens: {user prompt or "general review"}

Phase 2 – Review Dimensions

Evaluate the article across these dimensions. If the user specified a focus lens, lead with that dimension and go deeper on it.

D1 – Structure & Flow

  • Does the article have a clear thesis stated in the tldr and introduction?
  • Do sections follow a logical progression?
  • Is the heading structure clean (## for sections, minimal ### usage)?
  • Does the conclusion land – does it connect back to the thesis?

D2 – Knowledge Extraction (Atoms)

  • Are key insights captured as {% atom %} tags? (see the sketch after this list)
  • Is each atom correctly typed (definition, hypothesis, learning, etc.)?
  • Do atoms stand alone – are they readable without surrounding context?
  • Are hypothesis → learning arcs present where the article tracks evolving understanding?
  • If the source was a transcript: are all definitions, axioms, hypotheses, and predictions captured? This is the most common gap.
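
A sketch of a hypothesis → learning arc, written as full tags (as the do-list at the end of this workflow requires); the type attribute is the one referenced under D5, and the atom contents are placeholders:

{% atom type="hypothesis" %}
Readers meet the thesis in the tldr, so it only needs stating once.
{% /atom %}

{% atom type="learning" %}
Most readers enter mid-page, so the thesis has to be restated where key sections open.
{% /atom %}

Each atom should also pass the stand-alone test above: readable with no surrounding context.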

D3 – Citation Integrity

  • Does every external claim have a {% citation %} and matching {% cite %} marker? (see the sketch after this list)
  • Are there "weasel phrases" ("studies show", "research indicates") without citations?
  • Are citation keys consistent (author-year format)?
  • Is {% bibliography /%} present at the end?
  • Are there orphaned citations (declared but never cited) or broken cite keys?
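
A sketch of the expected wiring between claim, citation, and bibliography; the author-year key format and the self-closing {% bibliography /%} come from the checklist above, while the key attribute and the self-closing {% cite /%} form are assumptions:

An example claim that needs a source {% cite "doe-2024" /%}.

{% citation key="doe-2024" %}
Doe, J. (2024). Placeholder Source.
{% /citation %}

{% bibliography /%}

An orphaned citation is a {% citation %} block whose key never appears in a {% cite %} marker; a broken key is the reverse.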

D4 – Tag Usage Quality

  • Quotes: Are they attributed? Do they anchor arguments or are they decorative?
  • Callouts: Do they highlight genuine implications or are they overused?
  • Footnotes: Are they used for asides (correct) or for citations (incorrect – use cite instead)? (see the sketch after this list)
  • Carousels: Is the content genuinely parallel or would prose be better?
  • Assets: Do all src paths resolve? Do images have alt text?
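
A sketch of the footnote-vs-cite distinction plus an attributed quote; the footnote tag shape and the quote attribution attribute are assumptions, not confirmed schema:

A sourced claim {% cite "doe-2024" /%} with an aside.{% footnote %}An aside that would derail the paragraph – not a source.{% /footnote %}

{% quote attribution="Jane Doe" %}
A quote that anchors the argument rather than decorating it.
{% /quote %}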

D5 – Writing Quality

  • Is the prose clear and direct?
  • Are sentences varied in length?
  • Is jargon defined (via atom type="definition" or inline)?
  • Does the article respect its audience?

D6 – Frontmatter & Conventions

  • Required fields present: type, id, title, status, date (see the sketch after this list)
  • date is ISO 8601
  • id is kebab-case
  • Tags are relevant
  • Article tag date matches frontmatter date
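
A conforming frontmatter sketch, assuming YAML; the field set and conventions come from the checklist above, and every value is a placeholder:

---
type: article
id: example-article        # kebab-case
title: Example Article
status: draft              # draft | published
date: 2025-01-10           # ISO 8601
tags:
  - workflow
---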

Phase 3 – Report

Present the review as a structured report:

ARTICLE REVIEW – {title}
══════════════════════════════════════════════
VERDICT: {strong | good | needs work | weak}
{If focus lens was specified:}
FOCUS: {lens}
{Detailed findings for the focus dimension}
DIMENSIONS
D1 Structure & Flow {pass | note | issue}
{one-line finding}
D2 Knowledge Extraction {pass | note | issue}
{one-line finding}
D3 Citation Integrity {pass | note | issue}
{one-line finding}
D4 Tag Usage Quality {pass | note | issue}
{one-line finding}
D5 Writing Quality {pass | note | issue}
{one-line finding}
D6 Frontmatter {pass | note | issue}
{one-line finding}
CRITICAL ISSUES ({count})
1. {issue} – {suggested fix}
2. ...
SUGGESTIONS ({count})
1. {suggestion}
2. ...
MISSING ATOMS
{List any insights in the prose that should be captured as atoms but aren't}

Verdict scale:

  • strong – publish-ready, no critical issues
  • good – minor issues only, quick fixes
  • needs work – structural or citation gaps, worth revising
  • weak – fundamental problems (no thesis, no atoms, unsourced claims)

Phase 4 – Fix (optional)

If the user says "fix it", "apply the suggestions", or similar:

  1. Apply all critical issue fixes
  2. Apply suggestions where clearly beneficial
  3. Present the revised draft (do not write to disk)
  4. Wait for user confirmation before writing

If the user only wants specific fixes ("just fix the citations"), apply only those.


Do's and Don'ts

Do:

  • Keep review read-only unless the user explicitly asks for fixes
  • Report honestly – a weak article is weak; the user benefits from knowing
  • Write the full atom tag when suggesting new atoms (not just "add an atom here")
  • Lead with the focus lens dimension if the user specified one

Don't:

  • Write to disk during review unless the user explicitly asks for fixes
  • Downgrade an article's status from published to draft without user confirmation
  • Inflate verdicts – be honest about quality
  • Apply fixes the user did not request (if they say "just fix the citations", only fix citations)

Definition of Done

  • Review report delivered with per-dimension verdicts
  • All critical issues listed with fix suggestions
  • Revised draft presented if user requested fixes