Confluence Templates & Agent Documentation Patterns
The Problem with Agent Documentation
AI agents generate a lot of documentation — research reports, design documents, decision records, sprint plans. The question isn’t whether agents can write docs. They can, prolifically. The question is whether those docs are findable, consistent, and useful six months later.
Without structure, agent-generated documentation drifts into the same failure modes as human-generated documentation: inconsistent formatting, missing metadata, no discoverability, and no connection between related artifacts. An agent writes a research report, stores it somewhere, and three weeks later another agent (or the same agent in a new session) can’t find it because there’s no index, no labels, and no standard location.
The solution isn’t more documentation. It’s documentation infrastructure — templates that enforce structure, labeling conventions that enable search, and clear decisions about where different artifact types live.
Two-Layer Architecture
After evaluating several approaches, the team adopted a two-layer documentation architecture:
Layer 1: Git workspace (agent-primary) Research reports, daily memory files, skill documentation, and working notes live in the agent’s workspace as markdown files. Agents have direct filesystem access — reads are instant, writes are versioned via git, and the content is available without API calls. This is where agents do their thinking.
Layer 2: Confluence (human-primary) Design documents, decision records, sprint plans, and cross-agent reference material live in Confluence. These are the artifacts that humans browse, review, and approve. Confluence provides structured search (CQL), page properties for metadata, and a visual hierarchy that filesystem directories can’t match.
The key decision: research stays in git, Confluence gets an index page with summaries and links. This avoids the duplication drift problem — one authoritative copy in git, one lightweight pointer in Confluence. Full Confluence pages are reserved for artifacts that benefit from browser-based reading: design docs with diagrams, decision records for stakeholder review, sprint plans with status tracking.
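The git-primary pattern above implies some mechanism for producing the Confluence index entries from the research files themselves. A minimal sketch, assuming the (hypothetical) convention that each report opens with a `# Title` heading followed by a summary paragraph:

```python
from pathlib import Path

def build_index_rows(research_dir: str) -> list[dict]:
    """Collect title, summary, and git path for each markdown research report.

    Assumes each report starts with a '# Title' heading followed by a
    summary paragraph -- a team convention, not anything the tooling enforces.
    """
    rows = []
    for path in sorted(Path(research_dir).glob("*.md")):
        lines = path.read_text().splitlines()
        # First '# ' heading becomes the index title; fall back to the filename.
        title = next((l.lstrip("# ").strip() for l in lines if l.startswith("# ")), path.stem)
        # First non-empty, non-heading line becomes the one-line summary.
        summary = next((l.strip() for l in lines if l.strip() and not l.startswith("#")), "")
        rows.append({"title": title, "summary": summary, "git_path": str(path)})
    return rows
```

The resulting rows can be rendered into the Confluence index page (or a Confluence Database), keeping git as the single authoritative copy.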
DIY Templates
Confluence’s native template system requires admin-level API access that isn’t available through the MCP Atlassian server (authentication is locked in the MCP pod’s service account). Rather than escalating permissions, the team built a DIY template system using standard Confluence pages with placeholder syntax.
How It Works
Four template pages live in the FAD (Fiduciary Agent Documentation) Confluence space:
| Template | Purpose | Placeholder Fields |
|---|---|---|
| Design Document | System architecture and implementation plans | {{TITLE}}, {{STATUS}}, {{AUTHORS}}, {{DATE}}, {{CONTEXT}}, {{DECISION}}, {{RATIONALE}}, {{ALTERNATIVES}}, {{CONSEQUENCES}} |
| Research Document | Investigation reports and findings | {{TITLE}}, {{QUESTION}}, {{METHODOLOGY}}, {{FINDINGS}}, {{RECOMMENDATIONS}}, {{SOURCES}} |
| Decision Record | Architectural and operational decisions | {{TITLE}}, {{STATUS}}, {{CONTEXT}}, {{DECISION}}, {{RATIONALE}}, {{CONSEQUENCES}} |
| Sprint Plan | Work planning and task tracking | {{SPRINT_NAME}}, {{GOAL}}, {{TASKS}}, {{RISKS}}, {{RETROSPECTIVE}} |
Each template includes a YAML schema block that documents the expected fields, their types, and whether they’re required or optional. This schema serves double duty: it tells agents what data to provide, and it tells humans reviewing the output what to expect.
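The schema block lends itself to a mechanical completeness check before a page is published. A sketch, using a hypothetical Python mirror of the Decision Record schema (field names follow the template’s placeholders; the required/optional flags mirror the YAML schema block):

```python
# Hypothetical in-code mirror of the Decision Record template's YAML schema.
# Field names match the {{FIELD}} placeholders documented above.
DECISION_RECORD_SCHEMA = {
    "TITLE":        {"type": "string", "required": True},
    "STATUS":       {"type": "string", "required": True},
    "CONTEXT":      {"type": "string", "required": True},
    "DECISION":     {"type": "string", "required": True},
    "RATIONALE":    {"type": "string", "required": True},
    "CONSEQUENCES": {"type": "string", "required": False},
}

def missing_required(schema: dict, values: dict) -> list[str]:
    """Return the required fields the agent failed to supply."""
    return [f for f, spec in schema.items()
            if spec["required"] and not values.get(f)]
```

An agent (or a human reviewer) can run this before publishing and reject any submission with a non-empty result.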
Template Workflow
1. Agent reads template: confluence_get_page(template_id)
2. Agent extracts content: parse the page body and identify {{FIELD}} placeholders
3. Agent fills placeholders: replace each {{FIELD}} with actual content
4. Agent creates page: confluence_create_page(space, title, body, parent_id)
5. Agent adds labels: confluence_add_label(page_id, labels[])

This workflow runs entirely through the MCP Atlassian server — no direct API calls, no special permissions, no browser automation. The agent reads a template, fills it in, and publishes a properly structured page.
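The fill step (step 3) is plain string substitution. A minimal sketch of one way to implement it, with a guard so a half-filled page is never published (the function name and the fail-loud behavior are choices of this sketch, not part of the documented workflow):

```python
import re

def fill_template(template_body: str, values: dict[str, str]) -> str:
    """Replace every {{FIELD}} placeholder with its agent-supplied value.

    Raises KeyError when a placeholder has no value, so a half-filled
    page is caught before confluence_create_page is ever called.
    """
    def substitute(match: re.Match) -> str:
        field = match.group(1)
        if field not in values:
            raise KeyError(f"no value for placeholder {{{{{field}}}}}")
        return values[field]

    # Placeholders are upper-case identifiers wrapped in double braces.
    return re.sub(r"\{\{([A-Z_]+)\}\}", substitute, template_body)
```

Usage: the agent fetches the template body with confluence_get_page, calls fill_template with its field values, then passes the result to confluence_create_page.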
Why DIY Over Native Templates
Native Confluence templates would be cleaner — create from template, fill fields, done. But the MCP Atlassian server authenticates as a service account that doesn’t have template management permissions. Options:
- Escalate permissions — Give the service account admin access. Rejected: violates principle of least privilege. The service account should read/write pages, not manage space configuration.
- Build a custom API bridge — Write a service that translates template requests into admin API calls. Rejected: over-engineering for 4 templates that change rarely.
- DIY with placeholder pages — Store templates as regular pages, use string replacement. Accepted: simple, auditable, works with existing permissions, and agents already understand text substitution.
The DIY approach trades elegance for pragmatism. It works, it’s debuggable (you can read the template page in a browser), and it doesn’t require infrastructure changes.
Labeling Taxonomy
Confluence labels make pages discoverable via CQL (Confluence Query Language). Without labels, finding a specific design document means browsing the page hierarchy manually. With labels, agents can query: “find all decision records related to infrastructure.”
Standard Labels
Section titled “Standard Labels”| Category | Labels | Purpose |
|---|---|---|
| Type | design-doc, research, decision-record, sprint-plan, index | What kind of document |
| Domain | infrastructure, protocol, household, finance, portfolio, documentation, coordination | What area it covers |
| Agent | agent:fiducian, agent:alec, agent:claude-code | Who created/owns it |
| Status | draft, active, superseded, archived | Document lifecycle |
CQL Queries
Labels enable structured search that agents use programmatically:

All active decision records:

space = "FAD" AND label = "decision-record" AND label = "active"

Infrastructure documents by a specific agent:

space = "FAD" AND label = "infrastructure" AND label = "agent:fiducian"

All research indexes (for the git-primary lookup pattern):

space = "FAD" AND label = "research" AND label = "index"

These queries are significantly more reliable than keyword search for structured lookups. An agent looking for “the design document about the cybersecurity scanner” can query by type and domain labels rather than hoping the word “cybersecurity” appears in the title.
Confluence Databases
Section titled “Confluence Databases”Confluence Databases (available on Free tier) provide structured tabular data with field types, filtering, and sorting — essentially a lightweight spreadsheet embedded in Confluence.
Use cases for the agent team:
- Agent capability registry — Which agent has which tools, access levels, and skill layers
- Research index — Title, date, author, git path, summary for all research reports (replacing manual index pages)
- Sprint task tracker — Lightweight alternative to Jira board views for weekly review meetings
Databases complement the labeling system: labels tell you what kind of thing a page is; databases give you structured, queryable data about those things.
Jira Integration
The documentation system connects to Jira through bidirectional links:
- Jira → Confluence: Issues reference their Confluence deliverable page (design doc, decision record) in the description or a comment
- Confluence → Jira: Confluence pages include the originating Jira issue key for traceability
- Labels mirror components: Jira components (Infrastructure, Protocol, Household) map to Confluence domain labels (infrastructure, protocol, household)
This linkage ensures that when someone asks “what was decided about X?”, the path from Jira issue → Confluence decision record → git research file is navigable in both directions.
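The component-to-label mirroring can be kept honest with a single mapping table that both directions consult. A sketch, assuming the three components listed above (the mapping name and helper are hypothetical):

```python
# Hypothetical mirror of the FAD project's Jira components onto the
# Confluence domain labels from the taxonomy above.
COMPONENT_TO_LABEL = {
    "Infrastructure": "infrastructure",
    "Protocol": "protocol",
    "Household": "household",
}

def domain_labels(components: list[str]) -> list[str]:
    """Translate a Jira issue's components into Confluence domain labels.

    Unknown components are silently skipped; a stricter variant could
    raise instead, to surface drift between the two systems.
    """
    return [COMPONENT_TO_LABEL[c] for c in components if c in COMPONENT_TO_LABEL]
```

An agent creating the Confluence deliverable for an issue can apply these labels at publish time, so the label taxonomy never drifts from the Jira component list.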
Lessons Learned
Start with the taxonomy, not the templates. The labeling conventions were more impactful than the template structure. A poorly labeled page in a good template is harder to find than a well-labeled page in no template. Define your labels first, enforce them consistently, add templates for formatting consistency later.
Git-primary for agent work, Confluence for human consumption. Duplicating research into Confluence creates maintenance burden with no audience. Agents read files; humans browse Confluence. Serve each audience where they already are.
DIY templates are good enough. The placeholder approach lacks input validation, field-type enforcement, and auto-formatting that native templates provide. None of that has mattered in practice. Agents are disciplined enough to fill templates correctly (they follow instructions literally), and the rare formatting issue is caught in human review.
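The missing input validation can be partially recovered with a cheap pre-publish lint: scan the final page body for any placeholder markers that survived the fill step. A sketch (the helper is an illustration, not part of the documented system):

```python
import re

def unfilled_placeholders(body: str) -> list[str]:
    """List any {{FIELD}} markers still present in a page body.

    Run on the final body before confluence_create_page; a non-empty
    result means the fill step missed a field.
    """
    return re.findall(r"\{\{([A-Z_]+)\}\}", body)
```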
100 automation runs/month is tight. Jira Automation on Free tier caps at 100 rule executions per month. With 3 agents creating and transitioning issues, automation rules for auto-assignment and Discord webhooks consume runs quickly. Monitor usage before relying on automation-heavy workflows.
Technology Stack
- Documentation platform: Atlassian Confluence (Free tier, FAD space)
- Access method: MCP Atlassian server via MCP Gateway
- Template storage: Regular Confluence pages with {{FIELD}} placeholders
- Query language: CQL (Confluence Query Language) for label-based search
- Source control: Git workspace for research artifacts and working documents
- Issue tracking: Jira (FAD project) with bidirectional Confluence links