Project Planning & Execution Framework

This document defines a reproducible "Plan-First" workflow designed to ensure high-quality software delivery through structured documentation, automated enforcement, and human-in-the-loop approval.


1. The Philosophy: "Plan-First"

The core tenet of this framework is that documentation is code. We do not write implementation code until the technical approach is documented, reviewed, and approved. This prevents architectural drift and ensures the AI assistant remains aligned with the user's intent.

The Golden Rule:

"If it isn't in the Work Log (or Spike Log), it doesn't happen."


2. The 3 Tiers of Execution

We categorize work into three tiers to balance speed with rigorous documentation.

Tier L1: Micro Tasks

  • Scope: Typo fixes, one-line bug patches, minor config tweaks.
  • Requirement: Commit Message Only. No logs required.
  • Example:
    git commit -m "fix(ui): correct padding on mobile header"
    

Tier L2: Minor Tasks

  • Scope: Component refactors, small features, tech debt cleanup (limited scope).
  • Requirement: Mini-Log. Append an entry to docs/project_management/daily_log.md.
  • Template (filled-in example below):
    ## [YYYY-MM-DD] Task Name
    - **Context:** Brief sentence on what/why.
    - **Changes:** List of components/files touched.
    - **Verification:** How did you check it works?
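
  • Example: a filled-in entry might read like this (the date, task, and details are hypothetical):
    ## [2024-03-07] Debounce submit button
    - **Context:** Double-clicks on the submit button fired the API call twice.
    - **Changes:** SubmitButton component; wrapped the click handler in a 300 ms debounce.
    - **Verification:** Manual double-click test plus the existing unit suite.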
    

Tier L3: Major Tasks

  • Scope: Architecture changes, new Epics, complex logic (e.g., "Add Offline Mode").
  • Requirement: Full Work Log. Create a new file in docs/project_management/logs/.
  • Workflow:
    1. Draft: Copy templates/work_log.md -> logs/YYYY-MM-DD_feature.md (example below). Fill "Context" & "Approach".
    2. Approve: User reviews and approves the approach. Status -> In Progress.
    3. Implement: Write code. Update the log's "Execution Notes" as you go.
    4. Verify & Close: Run tests. Status -> Complete.
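
  • Example: step 1 as a shell command, assuming the directory layout from Section 3 (the date and feature name are hypothetical):
    cp docs/project_management/templates/work_log.md \
       docs/project_management/logs/2024-03-07_offline-mode.md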

Exception: The "Spike"

  • Scope: Research, prototyping, or validating a risky library.
  • Process: Create a Spike Log. Write throwaway code.
  • Rule: Once validated, delete the code and start a proper L3 task for the real implementation.

3. Tooling & Setup

We use a uv-based Python toolchain, with Bun covering the JavaScript side, to keep the environment reproducible.

Prerequisites

  • Python 3.12+: Managed via uv.
  • Node.js / Bun: JavaScript runtime and package management via bun.
  • Tooling:
    • uv tool install go-task-bin (Task runner).
    • uv tool install pre-commit (Git hooks).
    • bun add -d node-jq (JSON processing for AppSync evaluation).
    • uvx --with "mdformat-mkdocs[recommended]" mdformat (Formatter).
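
On a fresh machine, the installs above can be run in one pass; mdformat itself needs no install step because it is invoked on demand through uvx:

uv tool install go-task-bin
uv tool install pre-commit
bun add -d node-jq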

Directory Structure

.
├── Taskfile.yml                  # Task automation
├── .pre-commit-config.yaml       # Git hook configuration
├── docs/
│   ├── architecture/             # System design, data models
│   ├── project_management/
│   │   ├── index.md              # Roadmap
│   │   ├── daily_log.md          # L2 Mini-Logs
│   │   ├── templates/            # L3 Work Log Template
│   │   └── logs/                 # L3 History

Automated Enforcement

We use mdformat with the mkdocs plugin to enforce strict 4-space indentation and AST consistency.

Commands:

  • Check: task lint:md
  • Fix: task lint:md:fix (Run this before every commit).

Pre-commit Hook: Our .pre-commit-config.yaml automatically runs the linter (and any other configured hooks, such as tests) on every commit.

Documentation Stack (MkDocs)

We use MkDocs with the Material theme to treat documentation as a first-class product.

  • Theme: mkdocs-material provides professional navigation, search, and dark mode.
  • Plugins:
    • mkdocs-awesome-pages-plugin: Automatically generates navigation from file structure (no manual nav maintenance).
    • search: Native client-side search.
  • Serving: We use uvx to run MkDocs without adding it to the project's dependencies (full command below):
    task docs # Runs: uvx --with mkdocs-material ... mkdocs serve
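
    Written out in full (as defined in the Taskfile, Section 5A), task docs runs:

    uvx --with mkdocs-material --with mkdocs-awesome-pages-plugin mkdocs serve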
    

4. Best Practices

Context Memory (AI-Only)

When finishing an L3 task, generate a Context Memory block at the bottom of the log. This helps the AI resume context in the next session.

## 7. Context Memory (AI-Only)
!!! abstract "Summary for Future Context"
    (Auto-generated summary...)
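
A hypothetical filled-in block (the feature and details are invented) might read:

## 7. Context Memory (AI-Only)
!!! abstract "Summary for Future Context"
    (Hypothetical example) Offline mode is implemented; reads are served from a local
    cache and writes queue until reconnect. Open follow-up: conflict handling for
    queued writes, tracked under Execution Notes in this log.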

Continuous Documentation

Avoid "documentation debt" by updating architecture (docs/architecture/) files while implementing the code. The Work Log is the story of the change; the Architecture folder is the state of the system.


5. Reference Configuration

To replicate this environment, ensure your configuration files match these standards.

A. Taskfile.yml (Automation)

version: '3'

tasks:
  docs:
    desc: Serve documentation locally
    cmds:
      - uvx --with mkdocs-material --with mkdocs-awesome-pages-plugin mkdocs serve

  lint:md:
    desc: Check Markdown formatting
    cmds:
      - uvx --with "mdformat-mkdocs[recommended]" mdformat --check docs/ README.md --number

  lint:md:fix:
    desc: Fix Markdown formatting
    cmds:
      - uvx --with "mdformat-mkdocs[recommended]" mdformat docs/ README.md --number

B. .pre-commit-config.yaml (Enforcement)

Setup:

uv tool install pre-commit
pre-commit install

Example .pre-commit-config.yaml:

repos:
  - repo: local
    hooks:
      - id: lint-md
        name: Lint Markdown (mdformat)
        entry: task lint:md
        language: system
        pass_filenames: false
        always_run: true

Note: This configuration can be extended to include other on-commit work, such as running unit tests (npm run test) or code linters.
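
For instance, a unit-test hook along those lines can be appended under the same hooks list (a sketch; swap the entry command for your project's test runner):

      - id: unit-tests
        name: Unit tests
        entry: npm run test
        language: system
        pass_filenames: false
        always_run: true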

C. mkdocs.yml (Documentation Engine)

site_name: My Project   # placeholder; MkDocs requires site_name
theme:
  name: material
  features:
    - navigation.instant
    - navigation.tabs
    - content.code.copy
    - content.action.edit

markdown_extensions:
  - admonition
  - pymdownx.details
  - pymdownx.superfences # Advanced code blocks
  - pymdownx.tabbed: # Tabbed content
      alternate_style: true
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.emoji

plugins:
  - search
  - awesome-pages # Auto-navigation from folder structure
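
When a directory needs an explicit page order, awesome-pages reads an optional .pages file placed in that directory (a sketch; the "..." entry is the plugin's placeholder for all remaining pages):

# docs/project_management/.pages
nav:
  - index.md
  - daily_log.md
  - ...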

6. Reference: Work Log Structure

This reference outlines the required structure for a Full Work Log (Tier L3); a markdown skeleton of the template follows the outline.

1. Context

  • Goal: What are we trying to achieve?
  • Trigger: Why now? (User request, bug report, strategic goal)

2. Approach

  • Technical Strategy: High-level implementation plan.
  • Key Decisions: Rationale for specific choices (e.g., library selection).
  • Testing Strategy: Define what must be tested vs. what can be skipped.

3. Impact Analysis

  • Files to Modify: List of expected file changes.
  • Risks: Potential side effects or regressions.

4. Execution Plan

  • Step-by-step checklist of actions.

5. Execution Notes

  • Details discovered during implementation.
  • Tech Debt / Future Improvements.

6. User Approval & Key Learnings

  • Confirmation of completion and lessons learned.

7. Context Memory (AI-Only)

  • Summary for future context to maintain continuity across sessions.
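
Put together as a markdown skeleton (a sketch of what templates/work_log.md might contain; headings follow the outline above):

# Work Log: <Feature Name>

**Status:** Draft | In Progress | Complete

## 1. Context
- **Goal:**
- **Trigger:**

## 2. Approach
- **Technical Strategy:**
- **Key Decisions:**
- **Testing Strategy:**

## 3. Impact Analysis
- **Files to Modify:**
- **Risks:**

## 4. Execution Plan
- [ ] Step 1
- [ ] Step 2

## 5. Execution Notes
- Implementation findings, tech debt, follow-ups.

## 6. User Approval & Key Learnings
- Confirmation of completion and lessons learned.

## 7. Context Memory (AI-Only)
!!! abstract "Summary for Future Context"
    (Auto-generated summary...)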