Streamlining My Homelab Deployments with Bare Git Repositories and Git Hooks
I finally stopped fighting with my homelab deploy process and turned it into something that feels almost civilized.
The homelab: an old gaming desktop running Ubuntu, hosting a handful of services in Docker Compose. The goal: fast iteration without losing my mind or my work. It took a few generations to get here.
The First Era: SCP, Makefiles, and Madness
I wrote compose files locally, committed to GitHub for backup, SCP'd them over, SSH'd in, and ran the deploy by hand. For quick troubleshooting, that workflow was trash, so I inevitably edited files directly on the server in vi/nano, and doing that "often" became "very often."
Cumbersome != sustainable.
During this time I toyed with GitHub Actions to deploy on push and GitLab CI/CD pipelines, but the friction of setting up runners, managing secrets, and debugging remote jobs made them more hassle than they were worth for a homelab box. I spent a number of hours chasing my tail trying to get those systems to work smoothly (this is something I do in my day job … and at scale), but I could never keep myself focused enough to jump through all the hoops in my spare time. I'd much rather be playing video games, tbh.
The Second Era: Remote VS Code, Local Regret
I gave up for a while and just used VS Code remote editing on the server, along with SSH to deploy by hand. Yes, it had better ergonomics, but I kept forgetting to sync changes back to my local repo (or GitHub), so the risk of losing work was high. It also started to get in the way of using newer GenAI tools outside VS Code (hi, Codex), since those tools often didn't play well when run inside an SSH session and I was too lazy to really chase down the causes. (This is a homelab, after all, and the purpose is to learn new things, not fight old ones.)
This is ugly. It's embarrassing. I hated it. It was easy. I loved it. I hated that I loved it. I'm lucky it didn't blow up in my face.
The Third Era: A New Dawn Arises
(Don't say I don't have a flair for the dramatic.)
I don't know why it took me so long to consider getting back to basics and relying on Git over SSH. I suppose I've been trained to think of GitOps in a "push to a central repository and trigger events from there" mindset, but for a single homelab server, that approach is a heavy lift. My training for at-scale infrastructure has often blinded me to simpler solutions for smaller-scale problems. I've forced myself to think in terms of Kubernetes, Terraform, and CI/CD pipelines when all I really needed was a way to push code and have it deploy. Even today, I catch myself eyeing command-line driven pipeline systems I could set up on the server, just so I can mentally smack my hand because they're more complex than the problem needs. (Let's face it, I'll still probably do that later, just for fun, but it'll be invoked through the git hook.)
The simple solution is simple … push directly to the server with Git over SSH. And hooks! I can use a hook on the server's repo to do things! And, oh! I see the output of that script in my push response! As soon as that last part clicked, it was obvious.
- On the server, I created a bare repo (/srv/git/my-stack.git).
- On my desktop, I set that bare repo as a remote for my working copy.
- The bare repo has a post-receive hook that:
  - updates a checked-out working tree,
  - runs docker compose up -d (or whatever deploy command I need),
  - mirrors the repo to an external remote (currently GitLab).
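For reference, here's roughly what the one-time setup looks like. This is a sketch, assuming the server is reachable over SSH as three (the hostname mentioned in the hook below) and that the remote is named server to match what I push to later:

# On the server: create the bare repo that will receive pushes.
# (You may need to create /srv/git and fix its ownership first.)
ssh three 'git init --bare /srv/git/my-stack.git'

# On the desktop: add the bare repo as a remote of the working copy.
git remote add server ssh://three/srv/git/my-stack.git

# From then on, deploying is just a normal push.
git push server main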
Here's an example /srv/git/my-stack.git/hooks/post-receive hook (if you're trying this yourself, don't forget to chmod +x it):
#!/usr/bin/env bash
# Post-receive hook for the bare remote repo on three.
# Deploys main into /home/ricky/docker-apps/my-app-stack and restarts the stack.
set -euo pipefail
TARGET_REF="refs/heads/main"
# Don't judge me for having my work tree in my home dir ... it's a homelab!
WORK_TREE="/home/ricky/docker-apps/my-app-stack"
HOOK_DIR="$(cd "$(dirname "$0")" && pwd)"
GIT_DIR="${HOOK_DIR%/hooks}"
update_work_tree() {
  git --git-dir="$GIT_DIR" --work-tree="$WORK_TREE" checkout -f main
}

restart_stack() {
  cd "$WORK_TREE"
  docker compose up -d --build
}

# post-receive gets "<oldrev> <newrev> <ref>" lines on stdin, one per pushed ref.
while read -r oldrev newrev ref; do
  [ "$ref" = "$TARGET_REF" ] || continue
  echo "Deploying $ref -> $newrev to $WORK_TREE"
  update_work_tree
  restart_stack
done

Push once, deploy, and back up. No more stray edits.
The deploy script lives with the hook, so the server is the source of truth for how to run the stack. My local workflow stays normal Git: edit, git commit, git push server main, watch the hook do the rest. If I want history elsewhere, the hook's mirror keeps GitLab in sync without me thinking about it.
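The example hook above leaves out the mirror step for brevity. A minimal version, assuming a remote named gitlab has already been added to the bare repo (the URL here is illustrative), could look like this:

# One-time, on the server: register the external mirror in the bare repo.
git --git-dir=/srv/git/my-stack.git remote add gitlab git@gitlab.com:ricky/my-stack.git

# Then, in the post-receive hook, called after restart_stack:
mirror_repo() {
  # --mirror pushes (and prunes) all refs, so GitLab stays an exact copy.
  git --git-dir="$GIT_DIR" push --mirror gitlab
}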
It's a kind of magic
All of this came about in the context of a new service I was creating for my homelab: an MCP server to expose read-only container details and logs. This service was a code-for-sport project. I've created a couple of MCP servers already, but nothing at home that I would consider keeping around for a while. One way my development workflow was failing me was that there was no safe, controlled way to let all these new GenAI tools act on the server. Deploying this new MCP server was the first of my homelab services built with this new Git-over-SSH-with-hooks workflow … and it surprised me once I got it even half-working.
Suddenly, Codex (my current client of choice) could see the code, understand the context of the deploy process, and suggest changes that I could quickly test by pushing to the server. I could even have Codex inspect logs and running containers through the MCP server. Its access was still limited – it could only use the read-only MCP server tools and then git push when I told it to, but the context this flow provided made the AI assistance much more effective.
Even if it did misbehave somehow (and let's face it, GenAI is still just a slightly-competent-recent-college-graduate-level-intern able to regurgitate some fairly sophisticated code that it saw once in a StackOverflow post … wait, is StackOverflow still a thing?!), the deployment process was still git-based, so I could still roll back changes or fix things manually if needed.
Onward
I've now moved all of my Docker Compose stacks to this deployment process. It's already encouraged me to pick up issues I had pushed off because they required a lot of deployments and the iteration cycle was too painful. I even gave Codex a set of really complex instructions to troubleshoot a specific issue, hit enter, and went to dinner with my wife … I haven't trusted a GenAI tool that much before, and I even called it out to her (she scoffed, because she has a more-than-healthy skepticism of AI tools in general, and I like that about her).
Honestly, seeing this whole setup work has rather changed my mind about GenAI. We still have to be careful about the kind of control we give these tools, but with the right context and guardrails, they can be a huge productivity boost.