How GitHub Copilot Instructions Supercharged Our Code Reviews (and My Coffee Habit)

Learn how GitHub Copilot instruction files transformed our code review process with actionable examples and step-by-step setup.

TL;DR

We used GitHub Copilot instruction files to automate repetitive review checks, increase clarity, and improve onboarding. Add a `.github/copilot-instructions.md` file, define a few project-specific rules, and use Copilot Chat with `@workspace` to get guided, actionable PR feedback in minutes.

Start in 10 Minutes (Quick Checklist)

  • Create `.github/copilot-instructions.md`
  • Add your review rules (start small)
  • Add optional `review.md`, `branch-review.md`, `git-review.md`
  • Ask Copilot: `@workspace review this file`
  • Iterate weekly based on team feedback

*AI-supported code review*

Prologue: Powered by Caffeine and Curiosity

There’s something about that first sip of coffee that sharpens your focus—especially during code reviews. We used GitHub Copilot instruction files to make reviews faster, clearer, and friendlier. This post walks through the exact files and prompts we built to make reviews meaningful and repeatable.

The Problem: Code Reviews, the Good, the Bad, and the Tedious

Code reviews are the backbone of quality software, but they can be a slog—endless nitpicks, unclear expectations, and the dreaded “LGTM” (“Looks good to me”) with no context. We wanted to make reviews more consistent, reduce reviewer fatigue, and encourage best practices without sounding robotic. Enter: GitHub Copilot instruction files.

What Are GitHub Copilot Instruction Files?

GitHub Copilot instruction files are markdown documents that teach your workspace Copilot about your team’s standards, patterns, and workflow. Instead of hoping Copilot “gets” your team’s vibe, you give it a playbook: how to review, what to look for, and how to communicate.

In our workflow we created three instruction files—`review.md`, `branch-review.md`, and `git-review.md`—each tailored to a stage of development. These are living documents that encode our team’s values and priorities and help guide both AI-assisted and human reviews.

Building the Review Experience: Best Practices in Action

Here’s what we learned while crafting our instruction files.

1. Start Human, Stay Human

Reviews are ultimately about people, not just code. Use a friendly tone, celebrate wins, and give feedback the way you would over coffee with a teammate. Being constructive sets the right mood for both AI-assisted and human reviewers.

Pro Tips

  • Keep prompts conversational and concise.
  • Explicitly ask for clarifications rather than assumptions.

2. Be Specific, But Not Overbearing

Vague instructions lead to vague reviews. Break expectations into clear, actionable points: what to check (logic, style, tests, docs), what’s a blocker versus a nice-to-have, and how to handle uncertainty.

Pro Tips

  • Use checklists for repeatable items (tests, a11y, docs).
  • Mark rules as Critical/Major/Minor so reviewers know the priority.
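
To make the severity idea concrete, here is a minimal TypeScript sketch of how we think about prioritizing findings. The `Finding` type and `prioritize` helper are hypothetical illustrations, not part of any Copilot API:

```typescript
// Hypothetical model for review findings, mirroring the Critical/Major/Minor markers.
type Severity = "Critical" | "Major" | "Minor";

interface Finding {
  severity: Severity;
  message: string;
}

const rank: Record<Severity, number> = { Critical: 0, Major: 1, Minor: 2 };

// Sort findings so blockers surface first; nice-to-haves sink to the bottom.
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => rank[a.severity] - rank[b.severity]);
}

const findings: Finding[] = [
  { severity: "Minor", message: "Improve variable naming" },
  { severity: "Critical", message: "Missing fr.json translations" },
];

console.log(prioritize(findings)[0].severity); // "Critical"
```

The same three-level vocabulary in the instruction file and in your tooling keeps everyone aligned on what actually blocks a merge.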

3. Automate the Boring Stuff

Let automation handle linting, formatting, and import order—freeing humans to focus on architecture, design, and intent.

Pro Tips

  • Run `nx lint <project> --fix` or repo-specific linters automatically.
  • Ask Copilot to skip items that automated tools already verify.

4. Context Is King

Instruction files should reference project-specific guidelines—style guides, accessibility rules, and design token strategies—so feedback remains relevant.

Pro Tips

  • Link to docs, README sections, or concrete examples in the repo.
  • Provide small code snippets in the instruction files to illustrate preferred patterns.

5. Encourage Learning, Not Just Policing

Use instructions to nudge reviewers to explain why something matters. This turns reviews into teaching moments, helping junior devs learn faster.

Pro Tips

  • Request explanation-oriented feedback, not just fixes.
  • Celebrate good patterns in the “Strengths” output.

What Improved: The Measurable Impact

Since adopting Copilot instruction files, we’ve seen:

  • Faster review cycles
  • Clearer, more consistent PR feedback
  • Quicker onboarding for new team members
  • Less reviewer fatigue and more constructive engagement

Before & After: What a PR Looks Like with Copilot Instructions

Before:

A short PR description like “Fixed stuff and added tests,” leaving reviewers guessing.

After:

A structured description generated by Copilot:

  • Summary of changes
  • Impacted modules
  • Testing notes
  • Accessibility considerations
  • Linked issues

This gives reviewers context immediately and reduces back-and-forth.

Real Review Output Example

Here’s an excerpt from an actual branch review using our instruction files. The developer implemented a new app layout with sidebar and toolbar components:

Prompt:

@workspace review my branch changes against develop using branch-review guidelines

Copilot Output (excerpt):

### 🌿 Branch Information
- **Current Branch**: feature/new-layout
- **Target Branch**: origin/development
- **Files Changed**: 42
- **Lines Added**: 1504 | **Lines Removed**: 59
- **Change Type**: Feature (new app layout, sidebar, toolbar, i18n, tests)

### ✅ Strengths
- Follows Angular best practices (standalone components, signals, OnPush, modern control flow)
- Excellent separation of concerns: core/services, shared/components, pages
- All new components have `.spec.ts` tests with good coverage
- Accessibility and i18n considered (ARIA labels, translation keys, no hardcoded text)
- SCSS follows BEM and design token conventions
- No deprecated Angular APIs or structural directives

### ⚠️ Issues Found

#### Critical
- [ ] i18n coverage incomplete: Only en.json has full keys; de.json and fr.json 
      missing translations. Will break non-English users.

#### Major
- [ ] Branch hygiene: Merge commit instead of rebase for clean PR history
- [ ] No README or Storybook for new components (required for documentation)
- [ ] No density documentation or demonstration

#### Minor
- [ ] Some ARIA labels use hardcoded English instead of translation keys
- [ ] Use input signals instead of classic `@Input()` where possible
- [ ] Add explicit accessibility tests for keyboard navigation

### 🔧 Specific Recommendations
1. **i18n**: Copy all keys from en.json to de.json and fr.json
2. **Branch Hygiene**: Rebase onto origin/development
3. **Component Documentation**: Add README.md and .stories.ts for each component
4. **Density Support**: Document and demonstrate density tokens in Storybook

### 🚀 Merge Assessment
- **Ready to Merge**: No (critical i18n and documentation issues)
- **Risk Level**: Medium (i18n breakage for non-English users)
- **Breaking Changes**: No
- **Estimated Review Time**: 1-2 hours

Notice how the review provides a complete picture: branch stats, categorized issues with severity levels, actionable recommendations, and a clear merge readiness assessment. The structured output makes it easy to create a todo list and prioritize fixes.
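
The critical i18n finding above is the kind of check you can also verify mechanically. A small TypeScript sketch (assuming flat key/value translation JSON; nested files would need recursion) that reports keys present in `en.json` but missing from another locale:

```typescript
// Sketch: find translation keys present in the base locale but missing elsewhere.
// Assumes flat JSON translation files, e.g. { "app.title": "Dashboard" }.
function missingKeys(
  base: Record<string, unknown>,
  locale: Record<string, unknown>
): string[] {
  return Object.keys(base).filter((key) => !(key in locale));
}

const en = { "app.title": "Dashboard", "app.save": "Save" };
const de = { "app.title": "Übersicht" };

console.log(missingKeys(en, de)); // → ["app.save"]
```

Running a check like this in CI catches the problem before the review even starts, which is exactly the “automate the boring stuff” principle in action.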

Tips for Crafting Your Own Instruction Files

  1. Keep It Human. Use a conversational tone and explain the “why.”
  2. Iterate and Improve. Your first draft won’t be perfect—review and refine.
  3. Balance Strictness and Flexibility. Call out non-negotiables versus guidelines.
  4. Document the Why. Short rationales increase buy-in.
  5. Encourage Questions. Invite reviewers to ask clarifying questions.
  6. Integrate with Your Workflow. Store files in `.github/` and reference them in PR templates.
  7. Celebrate Successes. Positive feedback matters.

How Should a Good Instruction File Look? (With Examples)

Below are real, adapted examples you can copy into `.github/review.md` or use as workspace-level instructions.

Example 1: Friendly Review Checklist

# Code Review Checklist

- [ ] Is the code easy to read and understand?
- [ ] Are there tests for new logic?
- [ ] Does the code follow our style guide?
- [ ] Are all user-facing texts translatable?
- [ ] Is accessibility (a11y) considered?
- [ ] Are there any obvious performance issues?

Example 2: Encouraging Constructive Feedback

## How to Give Great Feedback

- Start with something positive
- Be specific about what can be improved
- Explain why a change is needed (not just what)
- Suggest alternatives if possible
- Thank the author for their work!

Example 3: Project-Specific Guidance (anonymized)

## Project Guidelines

- Use `@if` and `@for` instead of `*ngIf`/`*ngFor` in Angular templates
- All components must support density tokens and smooth transitions
- Never use `--mat-sys-*` tokens directly; use semantic tokens like `--my-component-action-button-color`
- Add `data-test` attributes for all interactive elements
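
Rules like “never use `--mat-sys-*` tokens directly” are also easy to enforce outside of review. A hedged TypeScript sketch of such a check (the regex and function are illustrative, not a real lint rule from any tool):

```typescript
// Sketch: flag direct use of `--mat-sys-*` design system tokens in stylesheet text,
// per the project rule favoring semantic tokens like `--my-component-action-button-color`.
function findForbiddenTokens(css: string): string[] {
  return css.match(/--mat-sys-[\w-]+/g) ?? [];
}

const scss = `.btn {
  color: var(--mat-sys-primary);
  background: var(--my-component-action-button-color);
}`;

console.log(findForbiddenTokens(scss)); // → ["--mat-sys-primary"]
```

Pairing the written guideline with a script (or a custom lint rule) means Copilot and humans only need to discuss the cases automation can’t decide.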

Example 4: Automation Reminders

## Automation

- Run `nx lint <project> --fix` before submitting a PR
- Use Copilot to check for import order, spacing, and formatting
- Let automation handle the boring stuff—focus reviews on logic, design, and accessibility

Getting Started: Prerequisites & Setup

Prerequisites

  • GitHub Copilot subscription (Individual, Business, or Enterprise)
  • VS Code with the GitHub Copilot extension installed
  • Git repository where you can add instruction files

Setting Up Your Instruction Files

  1. Create `.github/copilot-instructions.md` at the repo root if it is not already there
  2. Add your review guidelines (start with the templates above)
  3. Optionally add `review.md`, `branch-review.md`, `git-review.md` in `.github/` and mention them in the instruction file.
  4. Open Copilot Chat and try: `@workspace please review my current file`

How to Use These Files in Practice

File review: `@workspace review this file using our review standards`

Branch review (pre-PR): `@workspace review my branch changes against develop using branch-review guidelines`

Pre-commit review: `@workspace review my staged changes and suggest a commit message`

What Copilot Can (and Can’t) Do: Managing Expectations

What Works Well

  • Pattern detection (missing tests, accessibility issues, style violations)
  • Consistency checking against documented patterns
  • Documentation review and best-practice reminders
  • Helping junior developers understand standards through explanations

Current Limitations

  • Not fully autonomous—explicit prompts are required
  • Token limits on very large diffs (split the review into smaller chunks, or try other models available in your subscription)
  • Logic and architectural nuance still require human judgment
  • Context requirements: diffs or file context needed for accurate reviews
  • Possible false positives—verify suggestions

Appendix: Make VS Code Use Custom Review Commands

Want to type `/review` instead of a long `@workspace ...` command? You can either create your own VS Code extension (I won’t go into detail on that here), or you can use quick snippets.

Quick Snippets (No extension needed)

Building a VS Code extension just to type a bit less feels like overkill, but behold, there is an easy alternative. Just create VS Code snippets for quick access:

Add to `.vscode/copilot-chat.code-snippets`:

```json
{
  "Review File": {
    "prefix": "/review",
    "body": [
      "@workspace use .github/review.md to review the current file in detail"
    ],
    "description": "Review current file with review.md"
  },
  "Review Branch": {
    "prefix": "/review-branch",
    "body": [
      "@workspace use .github/branch-review.md to review my branch changes against ${1:develop}"
    ],
    "description": "Review branch with branch-review.md"
  },
  "Review Commit": {
    "prefix": "/review-commit",
    "body": [
      "@workspace use .github/git-review.md to review my staged changes and suggest a commit message"
    ],
    "description": "Review staged changes with git-review.md"
  }
}
```

Now when you type `/review` in Copilot Chat, VS Code autocompletes the full command. It’s not as elegant as a custom chat participant, but it works immediately with minimal setup.

Why Three Different Instruction Files? Understanding Their Roles

We use three separate instruction files: `review.md`, `branch-review.md`, and `git-review.md`. Each serves a specific purpose:

  • `review.md`: Deep-dive file-level checklist for quality, a11y, tests, and architecture.
  • `branch-review.md`: Pre-PR overview that compares your branch against mainline and generates a structured PR description.
  • `git-review.md`: Pre-commit checks for staged changes and conventional commit message suggestions.

Splitting files helps Copilot focus on the right concerns at each stage and makes guidance more relevant.
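
Under the hood, each stage simply points Copilot at a different file. A tiny TypeScript sketch of that mapping (the `promptFor` helper is hypothetical; file names follow our convention, so adjust them to your repo):

```typescript
// Sketch: map a review stage to its instruction file and build the chat prompt.
type Stage = "file" | "branch" | "commit";

const instructionFile: Record<Stage, string> = {
  file: ".github/review.md",
  branch: ".github/branch-review.md",
  commit: ".github/git-review.md",
};

function promptFor(stage: Stage, detail: string): string {
  return `@workspace use ${instructionFile[stage]} to ${detail}`;
}

console.log(promptFor("branch", "review my branch changes against develop"));
// "@workspace use .github/branch-review.md to review my branch changes against develop"
```

This is essentially what the snippet shortcuts in the appendix expand to for you.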

Epilogue: The Future of AI-Assisted Code Reviews

Small changes—like a well-crafted instruction file—can have outsized impacts. Code reviews are no longer something we dread; they’re a chance to learn, grow, and connect as a team. With thoughtful guidance, you can turn your review process from a bottleneck into a superpower.

Example: Anonymized Code Review Instruction File

Below is an anonymized and adapted version of our `review.md` instruction file. It includes a consolidated hard-rule callout.

# Code Review Instructions

You are an expert code reviewer for this project. When asked to perform a code review, follow these comprehensive guidelines:

## Review Checklist

### 1. Architecture & Design
- [ ] Follows project architecture patterns
- [ ] Components are standalone and properly organized
- [ ] Code is in the correct location (apps/libs)
- [ ] Dependencies between libraries are appropriate
- [ ] Monorepo structure is followed

> ⚠️ **Hard rule — block merge**
> No direct or indirect use of translation frameworks (e.g., ngx-translate, TranslateService, TranslatePipe) in publishable libraries. If present, mark as **Critical** and block until resolved.

### 2. Angular Best Practices
- [ ] Uses standalone components (not NgModules)
- [ ] Uses modern control flow syntax (`@if`, `@for`, `@switch`) instead of structural directives
- [ ] Uses CSS-based animations (with `animate.enter`/`animate.leave`) instead of deprecated triggers
- [ ] Uses OnPush change detection where appropriate
- [ ] Uses signals for reactive state management
- [ ] Proper component lifecycle usage
- [ ] No direct DOM manipulation (use framework abstractions)
- [ ] No inline templates or styles (use separate files)

### 3. Code Quality
- [ ] TypeScript strict mode compliance (no `any` types)
- [ ] Proper type definitions and interfaces
- [ ] Clear, descriptive variable and function names
- [ ] Code is DRY (Don't Repeat Yourself)
- [ ] Appropriate use of RxJS/operators
- [ ] Proper error handling

### 4. Testing
- [ ] Unit tests exist and follow conventions
- [ ] Tests use `describe` and `it` (not `test`)
- [ ] Tests are structured hierarchically
- [ ] Use `data-test` attributes for element queries
- [ ] Proper mocking of dependencies
- [ ] Tests cover success and failure paths
- [ ] 80%+ code coverage maintained

### 5. Accessibility (WCAG 2.1 AA)
- [ ] Semantic HTML elements used
- [ ] Proper ARIA attributes included
- [ ] Keyboard navigation supported
- [ ] Focus indicators visible
- [ ] Color contrast meets requirements (4.5:1 for normal text, 3:1 for large text)
- [ ] All ARIA text supports i18n

### 6. Internationalization (i18n)
- [ ] All user-facing text supports translation
- [ ] Translation keys are hierarchical (if used)
- [ ] No hardcoded user-facing strings
- [ ] Library components accept translated text via inputs/signals, or via translationProviders (populated by the app)

### 7. Styling & Design Tokens
- [ ] Uses SCSS with BEM methodology
- [ ] Semantic design tokens (not generic wrappers)
- [ ] No direct use of design system tokens in CSS rules
- [ ] Design tokens describe purpose (e.g., `--component-action-button-color`)
- [ ] Density support implemented correctly
- [ ] Smooth transitions for density changes
- [ ] Theme variables used for consistency

### 8. Density System
- [ ] Uses global density factor (e.g., `--density-factor`)
- [ ] Calculations in CSS variable definitions only
- [ ] Smooth transitions implemented
- [ ] Transition duration token defined
- [ ] Documented in component README
- [ ] Storybook density demonstration included

### 9. Component Structure
- [ ] Complete file set (`.ts`, `.html`, `.scss`, `.spec.ts`, `.stories.ts`, `README.md`)
- [ ] README documents purpose, usage, props, accessibility, density
- [ ] Storybook stories demonstrate key features
- [ ] Component published as secondary entry point (for libraries)
- [ ] Proper imports from dedicated entry points

### 10. Performance
- [ ] Lazy loading used where appropriate
- [ ] OnPush change detection for performance
- [ ] Minimal template expression complexity
- [ ] Proper subscription management

### 11. Security
- [ ] No sensitive data in code
- [ ] Input sanitization where needed
- [ ] Proper authentication/authorization checks
- [ ] No SQL injection or XSS vulnerabilities

### 12. Documentation
- [ ] Public APIs have JSDoc comments (libraries only)
- [ ] Complex logic has brief explanatory comments
- [ ] README files updated if needed
- [ ] Component documentation complete

## Review Output Format

Provide your review in the following format:

### ✅ Strengths
List what the code does well.

### ⚠️ Issues Found
Categorize issues by severity:
- **Critical**: Must fix before merge
- **Major**: Should fix before merge
- **Minor**: Nice to have improvements

#### 📝 Todo List of Issues
List all found issues as a markdown todo list for easy tracking. For example:
```
- [ ] Critical: Missing translation keys in fr.json
- [ ] Major: Component does not use OnPush change detection
- [ ] Minor: Improve variable naming in dashboard.service.ts
```

### 🔧 Specific Recommendations
Provide actionable suggestions with code examples where helpful.

### 📝 Additional Notes
Any other observations or suggestions.

## Review Tone
- Be constructive and respectful
- Explain the "why" behind suggestions
- Provide examples when possible
- Focus on learning and improvement
- Acknowledge good practices

Happy coding—and happy reviewing!

*If you enjoyed this post, let’s connect! Share your own review tips, workflow improvements, or Copilot experiences in the comments.*
