Skills are just versioned playbooks: a folder that contains a SKILL.md file with YAML frontmatter (name + description) and a markdown body. goldengoose (and the providers it integrates with) can discover skills from your repo, which means you can check your team’s workflows into source control alongside the code they’re meant to ship.

Where to put them

For repo-scoped skills, use:
  • .agents/skills/<skill-name>/SKILL.md
That’s the format we recommend because it keeps the workflow close to the project and makes it easy for your lead (or any delegate) to reuse the same process.
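As an illustration, a repo-scoped skill can be scaffolded with a few lines of Node-style TypeScript (the paths follow the convention above; the frontmatter content is abbreviated — see the full SKILL.md files below):

```typescript
// Sketch: scaffolding a repo-scoped skill folder (contents abbreviated)
import { mkdirSync, writeFileSync, existsSync } from "node:fs";

const dir = ".agents/skills/rpi";
mkdirSync(dir, { recursive: true });
writeFileSync(
  `${dir}/SKILL.md`,
  [
    "---",
    "name: rpi",
    "description: Ship complex features with a simple 3-stage pipeline",
    "---",
    "",
    "# RPI (Research -> Plan -> Implement)",
  ].join("\n"),
);
console.log(existsSync(`${dir}/SKILL.md`)); // true
```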

RPI (Research → Plan → Implement)

Save this as .agents/skills/rpi/SKILL.md:
---
name: rpi
description: Ship complex features with a simple 3-stage pipeline (Research agent -> Planning agent -> Codex feature supervisor implementation)
compatibility: Requires goldengoose team tools (gg_team_manage/gg_team_message/gg_team_status) and project check commands.
metadata:
  author: goldengoose
  version: "1.0.0"
---

# RPI (Research -> Plan -> Implement)

Use this skill when the user asks for a feature that is non-trivial and you want a lead-agent workflow that is fast, structured, and reliable.

## Goal
Ship complex features with this exact pipeline:
1. Research with a `research` agent.
2. Plan with a `codex` agent.
3. Implement with a `codex` supervisor (ask it to use the `feature-supervisor` skill).

Keep prompts simple and outcome-focused.

## Inputs
- `team_id`
- `lead_agent_id`
- `feature_info`


## Stage 1: Research Agent

Understand the feature that the user is asking for. Think about the likely systems in our codebase that this feature will span. This will help you write a good research prompt.

Add a `research` agent.

Do NOT describe the feature that the user specified or asked for; save that for stage 2 (planning). This is a research pass designed to map the likely systems involved, current dependencies, and control/data flow as they exist in the codebase today into a markdown document. This helps the planning and feature-supervisor agents understand the system at a high level and find what they need quickly.

Prompt should ask for:
- Architecture reconnaissance of the existing system in areas/modules likely involved
- Likely subsystems/files this feature would touch
- Patterns in the codebase that already exist for similar things
- Current data/event/control flow in those paths
- Integration points, dependencies, and risks
- A report written to `gg/<feature>-research-report.md` 

Required completion DM:
- Report path
- Summary
- Files inspected

Lead action after DM:
- Do not read docs directly; use the agent summary + file list for scoping
- Keep the research agent on team for follow-up unless user says remove

Prompt template:
"Without implementing anything, map out the current codebase architecture likely related to this upcoming feature. Focus on mapping existing systems and likely touchpoints. Produce `gg/<feature>-research-report.md` with: architecture map, likely affected files/modules, data/event flows, integration boundaries, risks, and file references. DM me with report path, summary, and files inspected."
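The template above can also be filled programmatically; a minimal sketch, assuming a hypothetical `researchPrompt` helper and a kebab-case feature name:

```typescript
// Sketch: filling the research prompt template (helper name is hypothetical)
function researchPrompt(feature: string): string {
  return [
    "Without implementing anything, map out the current codebase architecture",
    `likely related to this upcoming feature: ${feature}.`,
    "Focus on mapping existing systems and likely touchpoints.",
    `Produce \`gg/${feature}-research-report.md\` with: architecture map,`,
    "likely affected files/modules, data/event flows, integration boundaries,",
    "risks, and file references.",
    "DM me with report path, summary, and files inspected.",
  ].join(" ");
}

console.log(researchPrompt("dark-mode").includes("gg/dark-mode-research-report.md")); // true
```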

## Stage 2: Planning Agent (`codex`)
Create one `codex` agent.

Give it:
- the full feature spec (`feature_spec_full`)
- the research report path

Ask for an implementation-ready plan at:
- `gg/<feature>-implementation-plan.md`

Required plan structure:
- State of Current System
- State of Ideal System
- Plan Phases

For each phase require:
- Files to read before starting
- What to do

Also require:
- Cross-provider requirements (if relevant)
- Validation strategy per phase
- Risks/fallbacks
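One way to picture the required plan structure is as a typed record; the field names below are illustrative only, not a schema any tool enforces:

```typescript
// Illustrative only: the required plan-doc structure as TypeScript types
interface PlanPhase {
  filesToRead: string[]; // files to read before starting
  whatToDo: string;
  validation: string;    // validation strategy per phase
}

interface ImplementationPlan {
  currentSystem: string;
  idealSystem: string;
  phases: PlanPhase[];
  crossProviderRequirements?: string; // if relevant
  risksFallbacks: string[];
}

const plan: ImplementationPlan = {
  currentSystem: "Settings panel has no theme support",
  idealSystem: "Theme toggle persisted per user",
  phases: [
    {
      filesToRead: ["src/settings.ts"],
      whatToDo: "Add theme state and toggle UI",
      validation: "bun run check:frontend",
    },
  ],
  risksFallbacks: [],
};
console.log(plan.phases.length); // 1
```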

Required completion DM:
- Plan path
- Summary
- Full file list inspected

Lead action after DM:
- Do not read docs directly; use the plan agent summary to drive supervisor kickoff
- Confirm it is handoff-ready for implementation

Prompt template:
"Read the full feature specification below and then read `gg/<feature>-research-report.md` fully, inspect referenced code, and produce `gg/<feature>-implementation-plan.md` with: State of Current System, State of Ideal System, and Plan Phases. For each phase include files-to-read and what-to-do. Include cross-provider requirements, validation strategy, and risks/fallbacks. DM me with plan path, summary, and files inspected."

## Stage 3: Implementation Supervisor (`codex`)
Create one `codex` supervisor agent.

Supervisor instructions:
- Use `feature-supervisor` skill and follow it strictly
- Read both docs fully:
  - `gg/<feature>-research-report.md`
  - `gg/<feature>-implementation-plan.md`
- Supervise all phases to completion
- Send periodic DMs to lead with:
  - current phase
  - active implementer(s)
  - blockers/risks
  - what shipped
- Send final completion DM with:
  - all shipped phases
  - commit list
  - validation outcomes
  - remaining risks/follow-ups
  - full modified-file union

Standby rule:
- Once supervisor creates an implementer, it should stand by until implementer replies.
- Lead should also stand by and wait for supervisor updates unless user asks otherwise.

Prompt template:
"Use `feature-supervisor` skill. Read research + plan docs fully. The skill has all the instructions you need. Supervise implementation phase-by-phase to completion with periodic DMs and final completion report. After spawning each implementer, stand by until implementer replies."

## Lead Operating Rules
- Do not implement code directly; supervise and gate.
- Do not read research/plan docs directly; rely on DM summaries and completion reports from agents.
- When user gives a full spec, keep research prompt architecture-first and spec-light; pass full spec to planning.
- Keep scope per phase; do not batch unrelated work.
- Run gates before each ship:
  - frontend-only -> `bun run check:frontend`
  - rust-only -> `bun run check:rust`
  - mixed -> `bun run check:all`
- Use `gg_process_run` for long-running checks.
- Commit/push only after green checks for that phase scope.
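The gate selection above can be sketched as a small helper; the file-extension heuristics are assumptions, while the commands come from the rules above:

```typescript
// Sketch: picking the gate from changed file paths (extension heuristics are assumptions)
function gateFor(changedFiles: string[]): string {
  const hasRust = changedFiles.some((f) => f.endsWith(".rs"));
  const hasFrontend = changedFiles.some((f) => /\.(ts|tsx|css|html)$/.test(f));
  if (hasRust && hasFrontend) return "bun run check:all";
  if (hasRust) return "bun run check:rust";
  return "bun run check:frontend";
}

console.log(gateFor(["src/main.rs", "src/App.tsx"])); // bun run check:all
```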

## Completion Criteria
RPI is complete when:
- Research doc exists and is reviewed
- Plan doc exists and is reviewed
- Supervisor final DM confirms all phases shipped with commits + passing gates

Feature Supervisor

Save this as .agents/skills/feature-supervisor/SKILL.md:
---
name: feature-supervisor
description: Supervise multi-phase feature implementations by delegating each phase to an implementer agent, waiting for completion, performing a thorough diff-based code review, requesting fixes via DM, then running checks, committing, pushing, and moving to the next phase.
compatibility: Requires goldengoose team tools (gg_team_manage/gg_team_message) plus git and project check commands (bun/cargo/etc.).
metadata:
  author: goldengoose
  version: "1.0.0"
---

# Feature Supervisor

You are a **feature supervisor**. Your job is to drive a feature to completion by supervising a multi-phase plan:

1. Read the provided research + implementation plan.
2. Assign exactly one phase at a time to a dedicated implementer agent.
3. Do not disturb the implementer while they work.
4. When they DM completion, review the diffs like a senior reviewer.
5. Iterate with the implementer until the phase is correct and complete.
6. Run the required checks, then commit + push.
7. Remove the implementer and repeat for the next phase.
8. When all phases are complete, DM the lead agent with a final completion report.

This skill is designed to be **process-heavy and interruption-light**: clear assignments, no status pings, tight review cycles.

## Inputs you need (ask for anything missing)

- `team_id`: The gg team you’re supervising.
- `lead_agent_id`: Who to report completion to (DM).
- `implementation_plan_path`: Path to the implementation plan doc (required).
- `research_paths`: Zero or more research doc paths (optional, but read them if provided).
- `phase_number`: Which phase you are assigning now (default: 1).
- `implementer_model_preset`: The model preset for the implementer (usually specified by the user/lead).
- `checks_to_run`: Exact commands to run when a phase is complete (prefer user-specified).
  - If not provided, infer from the repo conventions (examples below).
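As a sketch, these inputs can be pictured as a typed record; the types and the default shown are assumptions based on the list above:

```typescript
// Illustrative only: supervisor inputs as a typed record (types are assumptions)
interface SupervisorInputs {
  team_id: string;
  lead_agent_id: string;
  implementation_plan_path: string; // required
  research_paths?: string[];        // optional, read if provided
  phase_number?: number;            // default: 1
  implementer_model_preset?: string;
  checks_to_run?: string[];         // prefer user-specified
}

const inputs: SupervisorInputs = {
  team_id: "team-1",
  lead_agent_id: "lead-1",
  implementation_plan_path: "gg/dark-mode-implementation-plan.md",
};
console.log(inputs.phase_number ?? 1); // 1
```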

## Post-Compaction Instructions

Do the following **ONLY** if you were notified that your context window was recently compacted (by the user / lead agent).

1) Immediately re-check the working tree and diffs:
   - `git status`
   - `git diff`
   - `git diff --staged` (if you stage during review)

   The compaction summary should include which phase you were reviewing and which phases are already completed. Combine that summary with the current `git diff` to reconstruct:
   - Which phase you’re currently on
   - Which implementer agent (`agent_id`) you assigned to that phase

   If the working tree is clean (no diff) and you were compacted **between phases** (e.g., after shipping but before removing/adding an agent), skip the steps below and continue normally with the phase loop.

Follow these steps only if the working tree is **dirty**:

2) Use `gg_team_status` to check whether the implementer agent is currently active or idle. If they are idle, they are either:
   - Waiting for a long-running command to be auto-injected, or
   - Waiting for you to review their changes

   You can tell which one it is because the status output includes the agent’s last message.

3) Decide whether to wait or review:
   - If they are **active and working**, or **idle but waiting for a long-running command injection**, stand by and wait for them to DM you completion. Do not poll or DM for status.
   - If they are **idle and waiting for your review**, proceed to step 4.
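The wait-or-review decision can be sketched as follows; the status shape and the last-message heuristic are assumptions, not the actual `gg_team_status` output format:

```typescript
// Sketch: the wait-or-review decision (status shape is hypothetical)
type AgentStatus = { active: boolean; lastMessage: string };

function nextAction(s: AgentStatus): "wait" | "review" {
  if (s.active) return "wait"; // still working: do not poll or DM
  // Idle: the last message tells you what the implementer is waiting on
  return /review|complete|done/i.test(s.lastMessage) ? "review" : "wait";
}

console.log(nextAction({ active: false, lastMessage: "Phase 2 complete, ready for review" })); // review
```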

4) Recovery flow (do this in order):
   - Stash the dirty working tree locally with a unique name.
   - Do Step 0 of the phase loop: re-read the research and implementation plan(s) to refresh your memory and re-establish the current system state.
   - Pop the stash.
   - Continue with Step 3 of the phase loop (Code Review mode → iterate → ship → remove agent → repeat).

## Phase loop (do this for every phase)

### 0) Read the plan + understand current system state (before assigning work)

1. Read **all** research documents if paths are provided.
2. Read the implementation plan **fully**.
3. Identify the current phase section and extract:
   - Goals and explicit acceptance criteria.
   - File paths mentioned for the phase.
   - Any cross-phase constraints you must preserve.
4. Read **all code files mentioned in the current phase** to understand:
   - What exists today.
   - What the phase must change/add.
   - Where integration boundaries and risks are.

If the plan references additional files indirectly (types, shared utilities, a command registry, etc.), read those too until the phase is unblocked.

### 1) Create an implementer agent

Use `gg_team_manage` to add exactly one new agent for the phase.

- Title suggestion: `Phase {N} Implementer`
- The implementer’s model preset should come from `implementer_model_preset`.
- Put the **full assignment** in the implementer’s `prompt` field (do not send an initial assignment DM).

Use the **Implementer Prompt Template** appended at the bottom of this file.

### 2) Stand by until completion

After creating the implementer agent:

- Do **not** ping for status.
- Do **not** “check in”.
- Do **not** poll or pressure the implementer.
- Stand by until they DM completion (completion will be injected as a message).

### 3) On completion DM: enter Code Review mode

When the implementer says the phase is complete, switch to Code Review mode:

1. Inspect changes with `git status` and `git diff` (and/or file-level diff tools).
2. Read every changed file end-to-end.
3. Check the implementation against the plan phase requirements and acceptance criteria.
4. Verify edge cases and failure modes, not just the happy path.
5. Check that tests were updated/added where business logic changed.

Use the **Review Checklist** appended at the bottom of this file.

### 4) If anything is missing/buggy: DM implementer with required changes

Your feedback DM should be:

- Concrete: file paths + what to change + why.
- Test-aware: specify what test(s) should be added/adjusted.
- Phase-scoped: avoid scope creep unless the plan requires it.

Then immediately stop again and wait for the implementer to DM completion.

Repeat review → feedback → wait until the phase is fully correct and you are satisfied with the implementation.

### 5) Once satisfied: run checks, commit, and push

When the phase looks correct:

1. Run the required checks (`checks_to_run`).
2. If checks pass, **commit and push**.
3. Record the commit SHA for reporting.

If this repo uses Bun/Tauri-style gates and you don’t have explicit commands, typical defaults are:

- Frontend-only: `bun run check:frontend`
- Rust-only: `bun run check:rust`
- Mixed changes: `bun run check:all`

Run all commands using `gg_process_run` and do not poll for status (wait for auto-injected completion).
For Rust/Cargo checks specifically, run them sequentially only (never in parallel).
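The sequential-only rule can be sketched with a runner that blocks on each command before starting the next; the runner and the commands shown are illustrative:

```typescript
// Sketch: a sequential check runner (commands are examples; runner is illustrative)
function runSequentially(cmds: string[], run: (cmd: string) => void): string[] {
  const completed: string[] = [];
  for (const cmd of cmds) {
    run(cmd);           // blocks until the command exits
    completed.push(cmd); // record completion order
  }
  return completed;
}

const order = runSequentially(["cargo fmt --check", "cargo clippy"], () => {});
console.log(order.join(" -> ")); // cargo fmt --check -> cargo clippy
```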

### 6) Remove implementer agent and advance to next phase

After the phase is landed:

1. Remove the implementer agent using `gg_team_manage` (`remove_agent_ids` - string[]).
2. Notify the team lead about which phase just landed (a concise update message).
3. Move on to the next phase and repeat the loop (create an implementer with the task in its prompt, review, ship, remove the implementer) until all phases in the feature are complete.

## Completion report (after final phase)

When all phases are complete:

- DM the lead agent a final report:
  - “All phases complete”
  - Final commit SHA(s) or range
  - What checks were run and their outcomes
  - Any known limitations or follow-up work

---

## Implementer Prompt Template (paste into `gg_team_manage.prompt`)

Use this as the implementer agent’s **base prompt** when you create them via `gg_team_manage`.

Keep it short, phase-scoped, and unambiguous. Do not send an initial assignment DM; reserve DMs for code review feedback and clarifications.

```text
You have been assigned to complete Phase {PHASE_NUMBER} of the implementation plan.

Read first (required)
1. Read the implementation plan fully: {IMPLEMENTATION_PLAN_PATH}
2. Read the research docs fully (if provided):
   - {RESEARCH_PATH_1}
   - {RESEARCH_PATH_2}
   - ...

Phase {PHASE_NUMBER} scope
Goal: {ONE_SENTENCE_PHASE_GOAL}

Definition of done:
- {ACCEPTANCE_CRITERION_1}
- {ACCEPTANCE_CRITERION_2}
- ...

Required prep (before editing)
In the Phase {PHASE_NUMBER} section of {IMPLEMENTATION_PLAN_PATH}, find the “files to read before editing” (or similarly named) subsection.
Then read every file listed in that subsection end-to-end to understand the current state before you start making changes.

Feel free to read additional files if needed to understand adjacent systems to gain more context before you start making changes.

Implementation expectations
- Implement the full Phase {PHASE_NUMBER} scope.
- Add/update tests where business logic changes.
- Follow repo patterns and existing architecture; avoid introducing new patterns unless the plan calls for it.
- Do not commit/push unless explicitly requested; leave changes in the working tree.

Required checks (run before you DM completion)
Run:
{CHECK_COMMAND_1}
{CHECK_COMMAND_2}

For Rust/Cargo checks, always use the native process runner (`gg_process_run`) and run them sequentially only (never in parallel).
If a check is long-running, use the native process runner (`gg_process_run`) if available.
When using this tool, do not poll for status. Run the command, then end your turn. The result will be inserted into the conversation when the command exits or fails.

When done
DM me (<my_agent_id>) with:
1. Confirmation that Phase {PHASE_NUMBER} is complete
2. A bullet summary of changes
3. The list of files changed
4. Any known risks or TODOs (should be empty if truly complete)

Then do not make further changes until you hear back from review.
```

---

## Review Checklist (use during Code Review mode)

Use this checklist when an implementer DMs that a phase is complete.

### Plan alignment

- Does the diff fully implement the **current phase** (not the entire plan, not a partial subset)?
- Are all acceptance criteria satisfied (explicit + implied)?
- Did the implementer accidentally drift into the next phase or unrelated refactors?

### Correctness and edge cases

- Are error cases handled (missing data, invalid inputs, timeouts, network failures)?
- Are boundary conditions covered (empty lists, null/undefined, large inputs, concurrency)?
- Are retries/backoff/timeouts reasonable where needed?
- Are feature flags / gating rules honored if the codebase uses them?

### Architecture and maintainability

- Does the change match existing module boundaries and patterns?
- Are responsibilities clear (no “god” functions/files)?
- Is the implementation simple and direct (no unnecessary abstraction)?
- Is logging/telemetry consistent with existing patterns?

### Performance and UX (if applicable)

- Any obvious hot paths worsened (extra polling, heavy re-renders, N+1 queries)?
- Any UI jank risks (unbounded lists, missing virtualization, expensive parsing)?
- Are loading/error/empty states correct?

### Tests and quality gates

- Are there tests for critical behavior and regression cases?
- Do tests fail deterministically (no timing flakes)?
- Are assertions meaningful (not just snapshot spam)?
- Are check commands updated if workflow changed?

### Security and safety (if applicable)

- Any secrets introduced into code or logs?
- Are permission boundaries and authorization checks correct?
- Are inputs validated/sanitized at trust boundaries?

### “Ready to land” criteria

- No TODOs for core behavior in this phase.
- No known failing checks.
- Change is understandable when reading the diff cold.
- The lead agent can ship/merge the commit without extra cleanup.