Slash Commands for Coding Agents: Getting the Most Out of LLMs
Four repeatable prompts I use to give coding agents the right context for productive 30+ minute sessions that stay on track.
I've been using Claude Code and Conductor pretty heavily for development work. These tools can run agents for 30+ minutes and stay on track, but only if you give them the right context and a productive loop to work in.
After running a bunch of these longer sessions, I noticed patterns in what worked. The agents that delivered the best results all started with similar types of context and followed similar workflows.
So I turned those patterns into four slash commands that I can drop into any coding agent to get consistent results.
The Four Commands
/feature - Start with a spec

This kicks off any substantial feature by making the agent gather requirements first:
Build a new feature by following this workflow:
Research: Read all documents in the /docs folder and the root README to understand the feature documentation process. Specifically, read "docs/how-to-use-docs.md" to understand the workflow.
Gather requirements: Conduct a structured conversation with the user, asking targeted follow-up questions to capture all necessary context and identify gaps
Document: Write a comprehensive feature specification based on the collected information
Your primary objective is producing a complete feature specification.
The agent reads your existing docs, asks clarifying questions, and produces a spec before writing any code. Saves a lot of backtracking later.
/plan - Maintain honest checklists
This creates and maintains implementation checklists that actually reflect reality:
Update the implementation plan checklist to reflect current codebase state:
Research: Read the implementation plan methodology docs in the appropriate docs subdirectory for the active feature
Verify: Examine the codebase and run terminal commands to confirm current implementation status
Update: Mark checklist items as complete [X] or incomplete [ ] based on actual state
Clarify: Ask the user for input if the active feature or branch is unclear
Goal: Ensure the markdown checklist accurately reflects development progress using the repository's established methodology.
The key part is "based on actual state." The agent checks the code and runs commands to see what's actually implemented. No more stale docs.
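What "verify" means in practice is ordinary shell work. A few illustrative commands; the paths, symbols, and test patterns here are placeholders, not from any actual repo:

```sh
# Does the code the checklist claims exist actually exist? (hypothetical symbol)
grep -rn "resetPassword" src/

# Do the related tests pass? (assumes Jest; the positional arg filters test file paths)
npm test -- auth

# Is this even the branch the checklist describes?
git branch --show-current
```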
/test - Hit 99% coverage systematically
Test and prepare for merge:
Run npm run test and npm run test:coverage to establish baseline
Iteratively add intelligent tests, running coverage after each addition until reaching 99%+ coverage
Run npm run build and npm run lint periodically to ensure no regressions
Use coverage-ignore comments (e.g., istanbul ignore) only when absolutely necessary to exclude lines from coverage
Complete when: coverage ≥99%, all tests pass, build succeeds, linting passes, and existing functionality remains intact
The "intelligent tests" part matters. It's not about gaming metrics. It's about tests that actually catch things. I've found it best to add tests while building features rather than building arbitrary tests after the fact.
/commit - Consistent commit messages
Commit every pending change on this branch, then return and simply confirm completion. No extra commentary.
The full command reviews changes and writes messages that focus on why, not just what. Much cleaner git history.
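That one-liner is the short form. The fuller version is along these lines (a sketch of the idea, not the exact prompt):

```
Review pending changes with git status and git diff.
Group related changes into logical commits rather than one catch-all commit.
Write each message so the subject says what changed and the body explains why.
When done, confirm completion with no extra commentary.
```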
The Docs System That Makes It Work
Here's the thing though—these commands are only as good as the documentation they work with. And that's where the real magic happens.
I use a structured docs system that creates a complete paper trail for every feature. Each feature gets its own folder with four phases:
Phase 1: Specification (01-specification.md)

The /feature command writes this. It includes:
- User stories and acceptance criteria
- Technical requirements and constraints
- Architecture decisions and rationale
- Integration points with existing systems
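To make that concrete, here's a hypothetical skeleton; the feature and the headings are invented for illustration, not a required format:

```markdown
# Feature: Password Reset

## User Stories
- As a user, I can request a reset email so I can regain access to my account.

## Acceptance Criteria
- Reset links expire after 30 minutes and cannot be replayed.

## Technical Requirements
- Reuse the existing email service; no new external dependencies.

## Architecture Decisions
- Store reset tokens hashed, mirroring how session tokens are handled.

## Integration Points
- Auth middleware, transactional email templates.
```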
Phase 2: Implementation Plan (02-implementation.md)

The /plan command maintains this checklist:
- [x] Set up project structure
- [x] Implement core authentication logic
- [ ] Add password reset functionality
- [ ] Create user profile management
- [ ] Write comprehensive tests
- [ ] Update documentation
Using a markdown file with checkboxes makes it easy to visually track progress and resume work if interrupted. The agent regularly verifies this against the actual codebase and updates it.
Phase 3: Development Notes (03-development.md)
Document decisions, roadblocks, and discoveries as you build.
Examples:
- Started with OAuth integration, ran into CORS issues
- Switched to session-based auth, much cleaner with our existing middleware
- Added password strength validation, discovered edge case with special characters
Phase 4: Retrospective (04-retrospective.md)
After the feature is done, I document:
- What worked well
- What I'd do differently
- Lessons learned
- Technical debt created
- Future enhancement ideas
Why It Works
Everything is documented. When you need to modify something later, you can read the spec and understand the decisions.
The agent gets better context. When it can read the complete history of a feature, it gives better suggestions.
It scales. Small features get lightweight docs. Complex features get comprehensive documentation.
One feature = one PR. Keeps changes focused.
Real Example: User Authentication Feature
Here's what a real feature looks like in this system:
docs/features/user-authentication/
├── 01-specification.md # What we're building and why
├── 02-implementation.md # Step-by-step checklist
├── 03-development.md # Daily development notes
└── 04-retrospective.md # What we learned
The specification defines OAuth vs. session-based auth, password requirements, and security considerations. The implementation plan breaks the work into 15 tasks, verified against the actual codebase. The development notes capture the switch from OAuth to sessions.
Getting Started
If you want to try this, here's the exact setup I use:
1. Create Your Docs Workflow
First, create docs/how-to-use-docs.md with this template (customize it for your needs):
Create a file with these sections:
Header:
# How to Use Docs: Feature Development Workflow
Overview section:
## Overview
The feature development process consists of four distinct phases:
1. **Feature Specification** - Define what to build
2. **Implementation Checklist** - Plan how to build it
3. **Iterative Development** - Build it step by step
4. **Retrospective** - Document what was done and lessons learned
**Important**: Each feature folder corresponds to one pull request.
File structure:
## File Structure
docs/features/[feature-name]/
├── 01-specification.md # What we're building and why
├── 02-implementation.md # Step-by-step checklist
├── 03-development.md # Daily development notes
└── 04-retrospective.md # What we learned
Phase templates:
Phase 1 - Feature Specification:
- User stories and acceptance criteria
- Technical requirements and constraints
- Architecture decisions and rationale
- Integration points with existing systems
Phase 2 - Implementation Checklist:
- Break down into specific, actionable tasks
- Regularly verify checklist against actual codebase state
Phase 3 - Development Notes:
- Document decisions, roadblocks, and discoveries as you build
- Track daily progress and architectural changes
Phase 4 - Retrospective:
- What worked well
- What you'd do differently
- Lessons learned
- Technical debt created
- Future enhancement ideas
2. Set Up Your Slash Commands
Add these four commands to your AI tool:
/feature:
Build a new feature by following this workflow:
Research: Read all documents in the /docs folder and the root README to understand the feature documentation process. Specifically, read "docs/how-to-use-docs.md" to understand the workflow.
Gather requirements: Conduct a structured conversation with the user, asking targeted follow-up questions to capture all necessary context and identify gaps
Document: Write a comprehensive feature specification based on the collected information
Your primary objective is producing a complete feature specification.
/plan:
Update the implementation plan checklist to reflect current codebase state:
Research: Read the implementation plan methodology docs in the appropriate docs subdirectory for the active feature
Verify: Examine the codebase and run terminal commands to confirm current implementation status
Update: Mark checklist items as complete [X] or incomplete [ ] based on actual state
Clarify: Ask the user for input if the active feature or branch is unclear
Goal: Ensure the markdown checklist accurately reflects development progress using the repository's established methodology.
/test:
Test and prepare for merge:
Run npm run test and npm run test:coverage to establish baseline
Iteratively add intelligent tests, running coverage after each addition until reaching 99%+ coverage
Run npm run build and npm run lint periodically to ensure no regressions
Use coverage-ignore comments (e.g., istanbul ignore) only when absolutely necessary to exclude lines from coverage
Complete when: coverage ≥99%, all tests pass, build succeeds, linting passes, and existing functionality remains intact
/commit:
Commit every pending change on this branch, then return and simply confirm completion. No extra commentary.
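How you register these depends on the tool. In Claude Code, for instance, project-scoped slash commands are just markdown files in a .claude/commands/ directory, named after the command; check your own tool's docs for its mechanism:

```
.claude/
└── commands/
    ├── feature.md   # becomes /feature
    ├── plan.md      # becomes /plan
    ├── test.md      # becomes /test
    └── commit.md    # becomes /commit
```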
3. Test with a Small Feature
Pick something simple for your first try. Maybe a utility function or a small UI component. Go through the full workflow to get a feel for it.
4. Customize for Your Stack
The commands I shared work for JavaScript/TypeScript projects with npm. Adjust the test commands, linting tools, and build processes for your tech stack.
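For example, a Python project's /test prompt might read like this. A sketch, assuming pytest with the pytest-cov plugin and Ruff; substitute whatever your project actually uses:

```
Run pytest and pytest --cov to establish baseline
Iteratively add intelligent tests, running coverage after each addition until reaching 99%+ coverage
Run ruff check periodically to ensure no regressions
Complete when: coverage ≥99%, all tests pass, and linting passes
```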
Bottom Line
These commands give coding agents the context they need for productive long sessions. The docs system creates a paper trail so you never lose context.
More work upfront, but it pays off when you need to modify or extend something later.
And it makes working with AI more predictable, because you know you'll get consistent results.