Training Your AI to Master Git & GitHub: The How-To Guide
A practical implementation guide for delegating your development workflow
This is the follow-up to "Stop Using Git, Stop Writing Tickets." In this guide, I'll walk through exactly how I trained AI to handle sophisticated Git workflows and GitHub issue management. You'll see the specific instructions I provided and the dramatic improvements in code documentation, traceability, and team coordination that resulted.
"Wait—you really don't run your own Git flow anymore?"
Not completely—but I've started handing it off.
And already, I'm seeing better structure, less overhead, and more leverage.
I taught Augment how to manage both Git and GitHub issues—intentionally, contextually, and with more discipline than I ever applied on my own.
Here's what that looks like.
Start with the principle: Git as narrative, not storage
The key shift in my approach was reframing Git's purpose. I stopped treating it as merely a code storage system and started seeing it as a tool for narrating engineering decisions.
This perspective change matters because most of what we struggle with in Git and GitHub is maintaining the narrative:
Commit messages that say "update" instead of explaining why
Branch names disconnected from actual issues
PRs with no summary of intent or impact
Bug reports missing context and acceptance criteria
It's not that we don't care. It's that we're busy. And narrative quality always breaks down under pressure.
That's why this handoff matters: AI agents are great at maintaining narrative discipline. They don't get tired, they don't skip steps, and they don't need calendar invites to stay aligned.
The goal: Git and GitHub with full context and zero drift
I didn't just want automation. I wanted:
Feature branches tied to specific issues with clear naming conventions
Commits that followed Conventional Commits format
GitHub issues structured with proper taxonomies and clear acceptance criteria
PRs that told a coherent story of what changed and why
Zero drift between code changes and their documentation
And I got it.
How I taught it: Real reference documentation, not prompts
This wasn't some clever prompt hack. I wrote a clear, structured reference guide—just like I would for a new team member. The training process took about an hour to set up initially, with another hour of refinement over the first week as I adjusted the instructions based on results.
Here's an excerpt from the workflow guide I provided to Augment:
🔁 Git Workflow & Version Control
We treat Git as a tool for narrating engineering decisions—not just storing code. Use it intentionally to reflect clarity, atomicity, and collaboration.
✅ Commit Philosophy
Commit early, commit often, but only once the change is coherent.
Each commit should answer: What changed, and why?
Prefer small, purposeful commits over monolithic ones.
🔤 Conventional Commit Format
We follow Conventional Commits:
<type>(optional-scope): short summary
[optional body]
[optional footer(s)]
Examples:
feat(auth): add OAuth login
fix(api): correct rate limit handling
chore(lint): update prettier config
Valid types: feat, fix, chore, docs, refactor, test, perf, ci
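As a sanity check, the subject-line format above can be validated with a small regular expression, for example in a commit-msg hook. This is a sketch; the exact scope and summary rules encoded here are my assumption, not something the guide spells out:

```python
import re

# Conventional Commit subject line, restricted to the types the guide allows.
# Scope characters and the "non-empty summary" rule are assumptions.
CONVENTIONAL_RE = re.compile(
    r"^(feat|fix|chore|docs|refactor|test|perf|ci)"  # type
    r"(\([a-z0-9-]+\))?"                             # optional scope
    r": \S.*$"                                       # ": " then a summary
)

def is_conventional(subject: str) -> bool:
    """Return True if the commit subject line follows the format."""
    return bool(CONVENTIONAL_RE.match(subject))
```

Dropping this check into `.git/hooks/commit-msg` would reject subjects like "updated api client" before they ever reach the history.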
🌱 Branch Naming Convention
Branches should reflect purpose and follow a type/slug format:
feature/search-api
fix/token-refresh
chore/update-deps
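The type/slug convention is mechanical enough to sketch as a tiny helper. The function name and slug rules here are illustrative, not part of any tool's API:

```python
import re

def branch_name(kind: str, summary: str) -> str:
    """Build a type/slug branch name from a change type and a short summary.

    Lowercases the summary, collapses anything non-alphanumeric into
    hyphens, and trims stray hyphens from the ends.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", summary.lower()).strip("-")
    return f"{kind}/{slug}"
```

For example, `branch_name("feature", "Search API")` yields `feature/search-api`, matching the convention above.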
🧭 GitHub Issue Tracking
We use GitHub Issues for all tracked work—features, bugs, ideas, spikes. Each issue answers: What are we doing, why does it matter, and how will we know it's done?
Issue Fields to Fill
Title – human-readable and emoji-tagged (e.g., 🚀 Add login flow)
Description – context, proposed approach, and acceptance criteria
Labels – use taxonomy below
Assignee – assign only when actively in progress
Milestone – for cycles/themes
Label Taxonomy
Use emojis in titles for quick scan: 🧠, 🐛, 🚀, 📌, etc.
That was it. No code injection. No plugin wiring. Just clear behavioral guidance that treated the AI as a team member who needed to understand our processes and values—and it picked them up faster than most junior developers I've worked with.
When implementing this yourself, note that:
You'll need a GitHub personal access token with appropriate permissions
Your AI assistant needs to understand API integration concepts
It helps to provide specific API endpoint references for GitHub
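For reference, here is roughly what issue creation looks like against GitHub's REST API ("create an issue" endpoint). The endpoint, headers, and token-based auth are standard GitHub API usage; the helper names and payload builder are my own sketch:

```python
import json
import urllib.request

API = "https://api.github.com"

def build_issue_payload(title: str, body: str, labels: list[str]) -> dict:
    """Assemble the JSON body for GitHub's 'create an issue' endpoint."""
    return {"title": title, "body": body, "labels": labels}

def create_issue(token: str, owner: str, repo: str, payload: dict) -> int:
    """POST the issue to GitHub.

    Requires a personal access token with permission to write issues
    on the repository. Returns the HTTP status (201 on success).
    """
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/issues",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This is the plumbing the AI assistant drives when it files issues on your behalf; the approval loop described later sits in front of the `create_issue` call.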
A real example: Before and after
Here's how this transformed my workflow in practice:
BEFORE (my old approach):
commit: updated api client
branch: fix-api
PR title: API fixes
Issue: Gemini API 404 error
AFTER (Augment-generated):
branch: fix/gemini-api-404-handling
commit: fix(api-client): handle 404 responses from non-existent model variants
body: When requesting models that don't exist, the API would throw uncaught 404s
footer: Fixes #87
PR title: 🐛 Fix Gemini API 404 error when requesting non-existent models
PR description:
## What
This PR adds proper error handling for 404 responses when the requested Gemini model variant doesn't exist.
## Why
Prevents uncaught exceptions when experimenting with new models.
## Testing
- Unit tests added for error handling path
- Manual verification with invalid model names
Fixes #87
Issue created by Augment:
Title: 🐛 Gemini API throws 404 on non-existent model requests
Description:
## Context
When requesting Gemini models that don't exist (e.g., 'gemini-1.5-pro-exp'),
the API client throws an unhandled 404 exception that crashes the application.
## Expected Behavior
The API client should gracefully handle 404 responses with appropriate
error messaging.
## Steps to Reproduce
1. Set API key in environment
2. Request non-existent model: `client.generateText('gemini-1.5-pro-exp', prompt)`
3. Observe uncaught exception
## Proposed Solution
Add specific error handling for 404 responses with clear error messages
that indicate the model doesn't exist.
Labels: type:bug, prio:high, theme:ai, size:s, status:to-do
The difference is striking. One approach is minimally functional; the other provides complete context, follows conventions, and maintains a clear narrative throughout the development process.
What it handles now
Now Augment handles:
Creating properly structured GitHub issues with appropriate labels and emojis
Establishing correctly named branches based on issue type
Committing with Conventional Commits format
Adding descriptive commit bodies when appropriate
Writing PR descriptions that explain what changed, why, and how to test
Linking everything together with issue references
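The what/why/testing PR layout shown in the example is easy to template. A minimal sketch (the function name and parameters are my own, not Augment's):

```python
def pr_description(what: str, why: str, testing: list[str], issue: int) -> str:
    """Render a PR body with What/Why/Testing sections and an issue link,
    mirroring the structure of the example above."""
    tests = "\n".join(f"- {t}" for t in testing)
    return (
        f"## What\n{what}\n\n"
        f"## Why\n{why}\n\n"
        f"## Testing\n{tests}\n\n"
        f"Fixes #{issue}"
    )
```

Codifying the template like this is what keeps the narrative from drifting: every PR answers the same three questions in the same order.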
All with minimal input. This has reduced my workflow overhead by approximately 35% over the past two weeks and dramatically improved our repository's documentation quality.
What it does better than I did
Here's what Augment now does consistently better than I used to:
Creates issues with proper taxonomies and labeling
Structures issues with context, reproduction steps, and acceptance criteria
Follows Conventional Commits religiously
Maintains emoji conventions in issue titles
Creates descriptive PRs with what/why/testing sections
Keeps branches organized and named correctly
Updates issue statuses as work progresses
This isn't about saving time.
It's about raising the floor on every piece of engineering hygiene and documenting decisions that otherwise get lost.
Handling edge cases: Decision logs and milestones
One fascinating area I added to the instructions was capturing architectural decisions. I included this guidance:
📌 Decision Logs
Capture important architectural decisions as type:decision issues.
Title format:
📌 Decision: Move to Mastra
Rationale and trade-offs should be added in comments
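A decision log entry can be drafted in the same shape as any other issue. A hypothetical sketch, assuming the title format and label taxonomy described above:

```python
def decision_issue(title: str, rationale: str, alternatives: list[str]) -> dict:
    """Draft a type:decision issue following the conventions above.

    Returns a dict shaped like a GitHub issue payload; the section
    headings in the body are illustrative assumptions.
    """
    alts = "\n".join(f"- {a}" for a in alternatives)
    body = (
        f"## Rationale\n{rationale}\n\n"
        f"## Alternatives Considered\n{alts}"
    )
    return {
        "title": f"📌 Decision: {title}",
        "body": body,
        "labels": ["type:decision"],
    }
```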
Now when I tell Augment "Create a decision log about our choice to use Firebase Auth," it drafts a comprehensive decision record that captures my rationale, alternatives considered, and implementation implications—all tagged properly as a decision issue.
Similarly, I taught it to manage milestones as roadmap buckets, organizing related issues by sprint or theme. This created a living roadmap that evolves with the project instead of becoming stale documentation.
What it doesn't do (yet)
Here's the nuance:
Augment requires that I approve any third-party API request—including GitHub issues. That's a current autonomy limitation, and honestly? I'm fine with it. I get a quick review loop before anything goes live.
But Git?
That's handled entirely on its own. Commits, branches, messages—it's running full agent mode based on the patterns I taught it.
Could I let it take over everything? Yes. But I like the control plane where it counts.
Implementation considerations
If you're implementing this approach, keep in mind:
Context window limits: Your reference guide needs to fit within your AI's context window alongside actual work
Technical prerequisites: Your AI needs access to Git CLI commands and (optionally) GitHub's API
Training approach: Provide comprehensive documentation, not just isolated prompts
Tool selection: I've tested this with both Augment and Claude Code with similar results
Iteration period: Expect 1-2 weeks of refinement as you clarify instructions
The takeaway
You don't need AI to write your code (though it can help with the first 70%).
You need it to maintain the narrative around your code—with consistency, structure, and purpose. You don't have to build a custom toolchain. You just need to treat your agent like a real team member:
Provide comprehensive process documentation
Explain the "why" behind conventions
Let it own the narrative maintenance
Step in when high-impact decisions are needed
That's how I trained Augment to run Git and GitHub.
And I'm not going back.
This article is part of our "From the Trenches" series, featuring practical implementations of AI-powered development workflows. For more hands-on guides, check our archives.