API Audit Workflow
The API Audit workflow treats actions, events, and operations as the primary lens on a specification. Every system behavior is either an action (mutates state, emits event), an operation (reads state), or a policy (reacts to event, fires action). Gaps in these chains are spec gaps.
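The three behavioral primitives can be sketched as data types. This is a minimal illustration with hypothetical Python names; the spec itself is language-agnostic and prescribes no implementation.

```python
from dataclasses import dataclass

# Hypothetical types for the behavioral primitives described above.
# Field names mirror the spec vocabulary, not any real implementation.
@dataclass
class Action:
    id: str        # imperative kebab-case, e.g. "submit-feedback"
    emits: str     # past-tense event id, e.g. "feedback-submitted"

@dataclass
class Operation:
    id: str        # read-only retrieval, e.g. "get-feedback-summary"

@dataclass
class Policy:
    source: str    # event id this policy reacts to
    reaction: str  # action id this policy fires in response

# A complete chain: action emits event, policy reacts to it.
submit = Action(id="submit-feedback", emits="feedback-submitted")
notify = Policy(source="feedback-submitted", reaction="send-notification")
```

A gap in the chain, such as an action whose `emits` event no policy consumes, is exactly what the audit mode surfaces.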
Four modes, each building on the previous:
| Mode | Command | What it does |
|---|---|---|
| List | api list [focus] | Cross-domain inventory of all actions, events, operations, policies |
| Audit | api audit [focus] | Inventory + connectivity analysis — finds missing events, orphan events, dangling policies, cycles |
| Derive | api derive [focus] | Audit + scan flows, surfaces, requirements, stories for implied API items not yet declared |
| Apply | api apply <report> | Read an .api-report.md file and write proposed items into domain files |
Focus parameter (optional): any document reference — domain/session, feature/feedback, flow/submit-feedback. When omitted, the full system is analyzed.
- For `list` and `audit`: filter the inventory to matching domains and features connected to the focus.
- For `derive`: scan only the specified scope for implied actions.
- For `audit` scoped to a feature: pull in all connected domains — cross-domain boundary gaps live there.
Output: docs/{scope}.api-report.md — scope names: full-system, domain--session, feature--feedback, etc.
Derivation Principles
These rules govern how API items are named and classified throughout the workflow. They apply to every mode.
| Principle | Right | Wrong |
|---|---|---|
| Name from domain noun, not UI element | submit-feedback | submit-form |
| Events mirror domain-scoped actions (past tense) | feedback-submitted | form-submitted |
| Recognize system-level actions as cross-cutting | send-notification in system domain | send-notification in feedback domain |
| Operations name the retrieval, not the display | get-feedback-summary | show-dashboard |
| Flow steps name the what, surface names the where | rate-response (from flow step) | click-star-widget (from surface) |
Action → Event naming derivation:
| Action pattern | Event pattern |
|---|---|
| create-X | X-created |
| update-X | X-updated |
| delete-X | X-deleted |
| submit-X | X-submitted |
| approve-X | X-approved |
| reject-X | X-rejected |
| start-X | X-started |
| complete-X | X-completed |
| cancel-X | X-cancelled |
For verbs not in this table, apply the general rule: convert the imperative verb to its past participle and swap it to the end — archive-session → session-archived, rate-response → response-rated.
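The general rule can be sketched as a small helper. This is a minimal illustration; the irregular-verb table below is illustrative, not exhaustive.

```python
# Verbs whose past participle doubles the final consonant; anything not
# listed here falls through to the regular "d"/"ed" rule.
IRREGULAR = {"cancel": "cancelled", "submit": "submitted"}

def derive_event_id(action_id: str) -> str:
    """Derive an event id from an imperative action id.

    "archive-session" -> "session-archived"
    "rate-response"   -> "response-rated"
    """
    verb, _, obj = action_id.partition("-")
    if verb in IRREGULAR:
        participle = IRREGULAR[verb]
    elif verb.endswith("e"):
        participle = verb + "d"    # archive -> archived, rate -> rated
    else:
        participle = verb + "ed"   # start -> started, reject -> rejected
    return f"{obj}-{participle}"
```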
Phase 1 — Build Inventory
Collect every declared API item across all domains (or within focus scope).
Data sources:
- Primary: `get_document("aggregate/api")` for the compiled API bundle
- Fallback: read `content/domains/*.domain.mdoc` files and extract `{% api %}` blocks
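The fallback extraction can be sketched with a regex over Markdoc-style paired tags. This is an assumption-laden sketch: it presumes `{% api %}` / `{% /api %}` delimiters and does no real parsing, which a production implementation should.

```python
import re

# Non-greedy match between an opening {% api %} and its closing {% /api %}.
# A real implementation should use a proper Markdoc parser instead.
API_BLOCK = re.compile(r"{%\s*api\s*%}(.*?){%\s*/api\s*%}", re.DOTALL)

def extract_api_blocks(source: str) -> list[str]:
    return [block.strip() for block in API_BLOCK.findall(source)]

doc = """
# Session domain
{% api %}
{% action id="start-session" %}
{% event id="session-started" %}
{% /api %}
"""
blocks = extract_api_blocks(doc)
```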
For each domain, record:
| Field | Source |
|---|---|
| Actions | {% action %} tags — id, description, properties |
| Events | {% event %} tags — id, description, properties |
| Operations | {% operation %} tags — id, type (query/command), description |
| Errors | {% error %} tags — id, code, description |
| Policies | {% policy %} tags — id, source (event ref), reaction (action ref) |
Build a flat index and output it:
```
INVENTORY — {scope}
──────────────────────────────────────────
domain/session
  action    start-session → session-started   (policy: auto-log-session)
  action    end-session → session-ended       (policy: auto-transcribe)
  event     session-started ← start-session
  event     session-ended ← end-session
  operation get-session query
  error     session-not-found 404

domain/feedback
  action    submit-feedback → feedback-submitted
  event     feedback-submitted ← submit-feedback   (no policy)
  operation get-feedback-summary query
```

For api list mode: write inventory to report and stop after Phase 4 (Write Report).
Phase 2 — Connectivity Analysis
Build a behavioral graph from the inventory and check for gaps.
2a — Unpaired actions
Actions that have no corresponding event. For each, derive the expected event name using the naming derivation table.
| Check | Rule |
|---|---|
| Action has no event | Expected: {object}-{past-participle} derived from action id |
| Action references an event that is not declared | The action implies an event name that does not exist as an {% event %} tag |
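The unpaired-action check reduces to a set comparison between each action's expected event and the declared events. A minimal sketch with illustrative ids:

```python
# Events actually declared via {% event %} tags in the inventory.
declared_events = {"feedback-submitted", "session-started"}

# Each action id mapped to the event name the naming table expects.
actions = {
    "submit-feedback": "feedback-submitted",
    "archive-session": "session-archived",
    "start-session": "session-started",
}

# Actions whose expected event was never declared are unpaired.
unpaired = {a: ev for a, ev in actions.items() if ev not in declared_events}
```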
2b — Orphan events
Events with no triggering action. These are either:
- External triggers — legitimate events from outside the system boundary
- Gaps — an action should exist but was never declared
Record both possibilities. The user decides which case applies.
2c — Dead-end events
Events that no policy reacts to. Not necessarily wrong — terminal events are valid (e.g. a final audit event). Flag them so the user can confirm they are intentionally terminal.
2d — Dangling policies
Policies whose source or reaction attributes reference an undeclared event or action. These are broken links in the behavioral chain.
2e — Cross-domain boundary analysis
Events emitted in domain A that domain B logically should react to — based on feature or flow context linking both domains. Cross-domain policies are often missing because authors focus on one domain at a time.
Detect these by:
- Finding features that link multiple domains (via the `domains=` attribute)
- Checking whether events in domain A that are referenced in shared flows have policies in domain B
2f — Policy chains
Trace reactive chains: event → policy → action → event → policy → action → …
| Check | Rule |
|---|---|
| Chain depth | Report chains deeper than 3 as a warning |
| Cycles | Flag circular chains (A → B → C → A) as errors |
A cycle means the system would loop infinitely on a single trigger. Always flag these — they are specification errors that must be resolved.
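Chain tracing and cycle detection can be sketched as a depth-first walk over an event graph, where each edge stands for event → policy → action → next event. The graph below is illustrative, not taken from a real spec.

```python
def trace_chains(graph: dict[str, list[str]], start: str,
                 path: tuple[str, ...] = ()) -> list[tuple[str, ...]]:
    """Return every chain from `start`; a chain whose last event repeats
    an earlier one is a cycle."""
    if start in path:
        return [path + (start,)]    # revisited event: close chain as a cycle
    path = path + (start,)
    nexts = graph.get(start, [])
    if not nexts:
        return [path]               # terminal event, chain ends cleanly
    chains = []
    for nxt in nexts:
        chains.extend(trace_chains(graph, nxt, path))
    return chains

graph = {
    "feedback-submitted": ["notification-sent"],
    "notification-sent": ["delivery-logged"],
    "order-placed": ["stock-reserved"],
    "stock-reserved": ["order-placed"],   # closes a cycle
}
ok = trace_chains(graph, "feedback-submitted")
cyclic = trace_chains(graph, "order-placed")
has_cycle = any(len(set(chain)) < len(chain) for chain in cyclic)
```

Depth is simply `len(chain) - 1`; chains deeper than 3 get a warning, any `has_cycle` result is an error.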
2g — System-level actions
Identify actions that are cross-cutting concerns rather than domain-specific:
| Pattern | Examples |
|---|---|
| Notification | send-notification, send-email, push-alert |
| Audit / logging | write-audit-log, record-metric |
| Integration | sync-to-external, publish-webhook |
| Scheduling | schedule-job, enqueue-task |
If these currently live inside a domain, flag this as a potential modeling issue — they may belong in a dedicated system or infrastructure domain.
For api audit mode: write inventory + connectivity analysis to report and stop after Phase 4 (Write Report).
Phase 3 — Derive Candidates
Scan non-API artifacts for implied actions, events, operations, and policies that are not yet declared in any {% api %} block.
3a — From flows
Read all content/flows/*.flow.mdoc files (or focus-scoped subset). For each {% step %}:
| Step pattern | Derived item |
|---|---|
| "{Actor} {verb} {object}" — mutating | Action: {verb}-{object} in kebab-case imperative |
| "{Actor} {verb} {object}" — reading | Operation: get-{object} or list-{objects} |
| "System {reacts/automatically}" | Policy candidate |
| Step outcome / postcondition | Event: {object}-{past-participle} |
Cross-reference each derived item against the inventory. Only report items not already declared.
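The cross-reference step is a set-membership filter over (type, id) pairs. A minimal sketch with illustrative ids:

```python
# Items already declared in {% api %} blocks, keyed by (type, id).
inventory = {("action", "submit-feedback"), ("event", "feedback-submitted")}

derived = [
    ("action", "submit-feedback"),  # already declared, drop
    ("action", "rate-response"),    # new, report as candidate
    ("event", "response-rated"),    # new, report as candidate
]

candidates = [item for item in derived if item not in inventory]
```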
3b — From surfaces
Read all content/surfaces/*.surface.mdoc files (or focus-scoped subset). Surface elements imply behavior:
| Surface element | Derived item |
|---|---|
| Button / submit action | Action — named from the domain context, not the button label |
| Data display / list view | Operation — get-{entity} or list-{entities} |
| Form fields | Properties on the associated action |
| Real-time indicators | Event subscription — the event that feeds the indicator |
The surface tells you where the action is triggered; the domain tells you what it is. A “Submit” button on a feedback form → submit-feedback, not submit-form.
3c — From requirements and acceptance criteria
Read all content/features/*.feature.mdoc files. Scan {% requirement %} and {% criterion %} blocks:
| Pattern | Derived item |
|---|---|
| "Users can {verb} {object}" | Action or operation depending on whether it mutates state |
| "System must {verb} when {condition}" | Policy: source = condition event, reaction = verb action |
| "Given {state} when {trigger} then {outcome}" | Action (trigger) + Event (outcome) + Policy (if reactive) |
| "{Object} must be {constraint}" | Requirement constraint — not an API item, skip |
3d — From stories and feature descriptions
Read content/stories/*.story.mdoc and feature {% tldr %} blocks:
| Pattern | Derived item |
|---|---|
| "As a {role}, I want to {verb} {object}" | Action: {verb}-{object} |
| "so that {outcome}" | Event if the outcome triggers further behavior |
| Feature scope description mentioning capabilities | Actions or operations not yet in connected domains |
3e — Propose policies for derived items
For each derived action → event pair, check whether the event should trigger downstream behavior:
- Does any flow continue after this event?
- Does any requirement reference a reaction to this event?
- Does another domain logically care about this event?
If yes, propose a policy with source = the derived event and reaction = the implied downstream action.
3f — Compile candidate list
Group all derived candidates by target domain. For each candidate, record:
| Field | Description |
|---|---|
| Type | action / event / operation / policy |
| Proposed id | Following naming conventions |
| Source | The artifact and specific element it was derived from (e.g. flow/submit-feedback step 3) |
| Domain | Which domain it belongs to |
| Counterpart | For actions: the proposed event. For events: the triggering action. For policies: source + reaction |
Phase 4 — Write Report
Write the output to docs/{scope}.api-report.md.
Scope naming:
- No focus → `full-system`
- `domain/session` → `domain--session`
- `feature/feedback` → `feature--feedback`
The report follows this structure. Omit any section that has zero items.
# API Report — {scope}
**Date:** {ISO 8601 date}
**Focus:** {all | document reference}
**Mode:** {list | audit | derive}
---
## Inventory
### domain/{id}
| Type | Id | Counterpart | Policy |
|------|-----|-------------|--------|
| action | submit-feedback | → feedback-submitted | — |
| event | feedback-submitted | ← submit-feedback | → notify-admin |
| operation | get-feedback | — | — |
| error | feedback-not-found | — | — |
**Coverage:** {n} actions, {m} events, {p} operations, {q} errors, {r} policies
(Repeat for each domain in scope)
---
## System-Level Actions
Cross-cutting actions that are not domain-specific:
| Action | Current Domain | Suggested Domain | Rationale |
|--------|---------------|-----------------|-----------|
| send-notification | feedback | system | Cross-cutting notification concern |
---
## Connectivity Analysis
### Unpaired Actions
Actions with no corresponding event:
| Domain | Action | Expected Event |
|--------|--------|---------------|
| session | archive-session | session-archived |
### Orphan Events
Events with no triggering action:
| Domain | Event | Possible Source |
|--------|-------|----------------|
| ... | ... | External trigger or missing action |
### Dead-End Events
Events with no policy reacting (confirm intentionally terminal):
| Domain | Event | Potential Reaction |
|--------|-------|--------------------|
| ... | ... | ... |
### Dangling Policies
Policies referencing undeclared items:
| Domain | Policy | Missing Item |
|--------|--------|-------------|
| ... | ... | event or action not declared |
### Cross-Domain Gaps
Events in domain A that domain B should react to:
| Source Domain | Event | Target Domain | Suggested Policy |
|--------------|-------|--------------|-----------------|
| ... | ... | ... | ... |
### Policy Chains
| Chain | Depth | Status |
|-------|-------|--------|
| feedback-submitted → notify-admin → notification-sent → log-delivery | 3 | ✓ |
| order-placed → … → order-placed | 3 | ⚠ CYCLE |
---
## Derived Candidates
### From Flows
| Source | Step | Type | Proposed Id | Counterpart | Domain |
|--------|------|------|-------------|-------------|--------|
| flow/submit-feedback | step 3 | action | rate-response | → response-rated | feedback |
### From Surfaces
| Source | Element | Type | Proposed Id | Domain |
|--------|---------|------|-------------|--------|
| surface/feedback-form | submit button | action | submit-feedback | feedback |
### From Requirements & Acceptance Criteria
| Source | Requirement / Criterion | Type | Proposed Id | Domain |
|--------|------------------------|------|-------------|--------|
| feature/feedback req:searchable | "users can search feedback" | operation | search-feedback | feedback |
### From Stories & Feature Descriptions
| Source | Pattern | Type | Proposed Id | Domain |
|--------|---------|------|-------------|--------|
| story/user-feedback | "I want to rate responses" | action | rate-response | feedback |
### Proposed Policies
| Source Event | Reaction Action | Proposed Policy Id | Domain | Rationale |
|-------------|----------------|-------------------|--------|-----------|
| response-rated | update-feedback-score | auto-update-score | feedback | Flow continues after rating |
---
## Summary
- **Inventory:** {n} items across {m} domains
- **Unpaired actions:** {a}
- **Orphan events:** {b}
- **Dead-end events:** {c}
- **Dangling policies:** {d}
- **Cross-domain gaps:** {e}
- **Policy chains:** {f} (max depth: {g}, cycles: {h})
- **System-level actions:** {i}
- **Derived candidates:** {j} ({k} actions, {l} events, {p} operations, {q} policies)

Phase 5 — Apply
Trigger: api apply <report-file> where <report-file> is a .api-report.md path.
Read the report file. Present a summary of all actionable items grouped by target domain and ask the user to confirm before writing any changes.
5a — Unpaired actions
For each unpaired action, create the missing {% event %} tag in the action’s domain {% api %} block. Derive the event name from the action using naming conventions.
5b — Derived candidates
For each derived candidate:
- Actions: Add `{% action %}` to the target domain’s `{% api %}` block
- Events: Add `{% event %}` to the target domain’s `{% api %}` block
- Operations: Add `{% operation %}` to the target domain’s `{% api %}` block
5c — Proposed policies
For each proposed policy, add {% policy %} with structured source and reaction attributes to the target domain.
5d — System-level actions
If the report recommends relocating cross-cutting actions to a system domain:
- Check if the target domain exists — if not, note it needs to be created first (hand off to the `new domain` workflow)
- Move the action and its related events and policies from the current domain to the target domain
- Update any policy references that pointed to the moved items
Apply rules
- Read each target file in full before editing — never edit blindly
- Apply all changes for a single file in one pass
- Present a summary of planned changes before writing — the user must confirm
- After writing, run `get_system_status()` to verify no new lint errors were introduced
- Report what was applied and what was skipped
```
Applied — {n} items written across {m} domains

domain/session  — added 2 events (session-archived, session-exported)
domain/feedback — added 1 action (rate-response), 1 event (response-rated), 1 policy (auto-update-score)

Skipped: {k} items (user declined or target domain missing)
```

Do’s and Don’ts
Do:
- Derive action and event names from domain semantics, not UI mechanics
- Recognize system-level cross-cutting actions and separate them
- Trace policy chains to detect cycles and excessive depth
- Record the source artifact for every derived candidate
- Propose policies alongside derived action → event pairs
- Omit empty sections from the report
- Read every file before editing during apply
- Present all changes for user confirmation before writing
Don’t:
- Name actions after UI elements (`submit-form`, `click-button`, `show-panel`)
- Name actions after system-level generics when a domain-specific name exists
- Invent API items not supported by any source artifact
- Auto-apply changes without user confirmation
- Silently skip cross-domain gaps — these are often the most important findings
- Treat dead-end events as errors — they may be intentionally terminal
- Trace policy chains deeper than 3 without flagging
Definition of Done
- All declared API items inventoried and indexed (list mode)
- Connectivity analysis complete — all gap types checked (audit mode)
- Policy chains traced with cycles flagged (audit mode)
- Source artifacts scanned for derived candidates (derive mode)
- Policies proposed for derived action → event pairs (derive mode)
- Report written to `docs/{scope}.api-report.md`
- Summary with counts displayed to user
- All applied changes confirmed by user before writing (apply mode)
- Post-apply lint check returns no new errors (apply mode)