<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Ricky Smith</title>
    <description>Staff Software Engineer at Cisco</description>
    <link>https://www.ricky-dev.com/</link>
    <atom:link href="https://www.ricky-dev.com/feed.xml" rel="self" type="application/rss+xml" />
    <pubDate>Sun, 11 Jan 2026 17:14:58 -0500</pubDate>
    <lastBuildDate>Sun, 11 Jan 2026 17:14:58 -0500</lastBuildDate>
    <generator>Jekyll v4.4.1</generator>
    
      <item>
        <title>Agentic Tooling Across Multiple Repositories</title>
        <description>&lt;p&gt;In my organization, we have a great number of repositories that contain overlapping logic. For example, to manage our large number of cloud environments, we have separate repositories just for Terraform. We&apos;ve split up our code based on purpose and audience to better organize our work. However, this creates challenges when we need to make a change that spans multiple repositories. This challenge has always existed, but it was manageable when doing work manually – you can just open the multiple repositories in your editor of choice and have multiple terminal tabs open for working with git in each repository.&lt;/p&gt;

&lt;p&gt;But as we lean into the use of agentic tooling (AI agents that can act on your behalf) to automate repetitive tasks – especially when we want those tools to perform broader changes that require context across multiple repositories – this becomes a bigger challenge. In this post I’ll walk through how I use git worktrees and a simple directory convention to give those agents the right context to work safely across many repos.&lt;/p&gt;

&lt;!--more--&gt;

&lt;h2 id=&quot;organizing-my-repositories&quot;&gt;Organizing My Repositories&lt;/h2&gt;

&lt;p&gt;Before we get into the solution, let&apos;s take a look at how I organize my repositories locally. I place all of my repositories under a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/projects&lt;/code&gt; directory, organized first by organization, then by the full project namespace that matches the remote repository.&lt;/p&gt;

&lt;p&gt;So, you&apos;d expect to see a structure like this on my laptop:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;~/projects/
| - acme-corp/
| - | - infra-team/
| - | - | - terraform/account-provisioning/
| - | - | - terraform/networking/
| - | - | - terraform/networking-modules/
| - | - shared/
| - | - | - terraform/service-resources/
| - | - | - terraform/service-modules/
| - | - app-devs/
| - | - | - frontend-app/
| - | - | - backend-api/
| - | - some-product-team/
| - | - | - backend-services/
| - public/
| - | - some-open-source-repo/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;a-high-level-solution--directory-structure-with-git-worktrees&quot;&gt;A High-Level Solution – Directory Structure with Git Worktrees&lt;/h2&gt;

&lt;p&gt;My idea was to use git worktrees to create a single directory structure that contains multiple repositories, all synced to the same branch name. This way, I can create a workspace for a specific task that requires context across multiple repositories. The branch-named parent directory provides a space for defining context for the task at hand and provides a workspace for agentic tooling without polluting the individual repositories.&lt;/p&gt;

&lt;h2 id=&quot;git-worktrees&quot;&gt;Git Worktrees&lt;/h2&gt;

&lt;p&gt;Git worktrees let you create additional working directories for the same repository. Each worktree is tied to a branch and shares the same &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.git&lt;/code&gt; data, so you don’t need to reclone. Creating a worktree is fast and uses little disk space, and because the object database and refs are shared, a fetch or pull in any worktree is immediately visible to the main repo and all other worktrees.&lt;/p&gt;

&lt;p&gt;The common use case is when you need to quickly pivot to working on another problem/feature, but you have a lot of WIP stuff in your working directory. Most developers I know would go about stashing their changes or creating a &quot;WIP&quot; commit, then switching branches. With git worktrees, you can just create a new working directory for the branch you want to work on and leave your main working directory alone. It&apos;s very handy, but not a tool that most people reach for often (maybe not often enough).&lt;/p&gt;
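&lt;p&gt;As a minimal sketch of that quick-pivot use case (the branch name and the sibling path here are illustrative, not from any real repository):&lt;/p&gt;

```shell
# From inside a repository with uncommitted WIP, create a second
# working directory on a new branch without touching the current one.
# "hotfix/login-bug" and the ../ path are made-up example names.
git worktree add -b hotfix/login-bug ../repo-hotfix-login-bug

# Work, commit, and push from ../repo-hotfix-login-bug, then:
git worktree list                              # shows every worktree
git worktree remove ../repo-hotfix-login-bug   # clean up when merged
```

&lt;p&gt;The original working directory keeps its WIP untouched the whole time.&lt;/p&gt;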

&lt;h2 id=&quot;git-worktree-directory-structure&quot;&gt;Git Worktree Directory Structure&lt;/h2&gt;

&lt;p&gt;In my new scheme, I&apos;ve started to create a directory structure that looks like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/projects/worktrees/&amp;lt;branch-name&amp;gt;/&amp;lt;repository-name&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So, if we think of a really broad task, like &quot;scan all Terraform modules using Trivy and fix any issues found&quot;, I might create a worktree structure like this:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;~/projects/worktrees/scan-fix-trivy-issues/
| - account-provisioning/
| - networking/
| - networking-modules/
| - service-resources/
| - service-modules/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This has some really big benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Each task includes only the repositories it needs&lt;/li&gt;
  &lt;li&gt;All repositories are checked out to the same branch name, making merge-request/pull-request tracking easier&lt;/li&gt;
  &lt;li&gt;The parent directory (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scan-fix-trivy-issues&lt;/code&gt;) provides a workspace for the task, outside of any one repository&lt;/li&gt;
  &lt;li&gt;Agentic tools can be run from the parent directory, providing a single context for all repositories involved in the task&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;giving-context&quot;&gt;Giving Context&lt;/h2&gt;

&lt;p&gt;The big &quot;aha&quot; moment was that the parent directory (the branch name) provides a context for the task at hand. This is really powerful when using agentic tooling, because the tooling can be run from the parent directory, and it can operate on all repositories within that context. &lt;strong&gt;So far, I&apos;m starting my tasks with two key files: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AGENTS.md&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PLAN.md&lt;/code&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AGENTS.md&lt;/code&gt; file is already a well-known pattern for providing seed context for agentic tooling. By placing this file in the parent directory, I can give high-level information about the directory structure and instructions for the agents to read &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AGENTS.md&lt;/code&gt; files in each repository, and to reference the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PLAN.md&lt;/code&gt; file for the overall plan.&lt;/p&gt;

&lt;div class=&quot;language-markdown highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;This directory contains multiple repositories related to scanning and fixing Trivy issues in our Terraform modules.
Each repository is checked out to the same branch name: &lt;span class=&quot;sb&quot;&gt;`scan-fix-trivy-issues`&lt;/span&gt;.
Follow the plan outlined in the &lt;span class=&quot;sb&quot;&gt;`PLAN.md`&lt;/span&gt; file to complete the task.
For each repository, please read the &lt;span class=&quot;sb&quot;&gt;`AGENTS.md`&lt;/span&gt; file located in the root of the repository for specific instructions related to that repository.
When making changes, please create git commits with clear messages, push changes to the remote repository, and create merge requests as needed.
Track all created merge requests in a &lt;span class=&quot;sb&quot;&gt;`MERGE_REQUESTS.md`&lt;/span&gt; file in this parent directory.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In my &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PLAN.md&lt;/code&gt; file, I&apos;ve found that providing a high-level outline of the steps, along with a checklist to work through, helps the agent stay on track and gives it a way to record its progress. You&apos;re likely to pause the overall task at some point – to focus on a specific issue, to push changes and address CI/CD results, or to reboot your computer. At any time, you can just tell your agent to &quot;continue the plan&quot; and it can pick up where it left off.&lt;/p&gt;

&lt;div class=&quot;language-markdown highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gh&quot;&gt;# Plan: Scan and fix Trivy issues&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;
-&lt;/span&gt; [ ] Scan all Terraform modules with Trivy
&lt;span class=&quot;p&quot;&gt;-&lt;/span&gt; [ ] Triage findings and propose fixes
&lt;span class=&quot;p&quot;&gt;-&lt;/span&gt; [ ] Apply fixes and run &lt;span class=&quot;sb&quot;&gt;`terraform validate`&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;-&lt;/span&gt; [ ] Open merge requests and record them in &lt;span class=&quot;sb&quot;&gt;`MERGE_REQUESTS.md`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;shell-alias-for-creating-worktrees&quot;&gt;A Shell Function for Creating Worktrees&lt;/h3&gt;

&lt;p&gt;Now, one of the pain points of working with git worktrees is that the commands can be a bit verbose. To make it easier to create these worktree structures, I&apos;ve written a ZSH function that automates the process, focused on this workflow.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Create a new worktree for a new branch&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Usage: wt-new &amp;lt;branch-name&amp;gt;&lt;/span&gt;
wt-new&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-z&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
        &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Error: Branch name required&quot;&lt;/span&gt;
        &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Usage: wt-new &amp;lt;branch-name&amp;gt;&quot;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;return &lt;/span&gt;1
    &lt;span class=&quot;k&quot;&gt;fi
    
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;branch_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
    &lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;repo_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;basename&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;git rev-parse &lt;span class=&quot;nt&quot;&gt;--show-toplevel&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class=&quot;si&quot;&gt;))&lt;/span&gt;
    
    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-z&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$repo_name&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
        &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Error: Not in a git repository&quot;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;return &lt;/span&gt;1
    &lt;span class=&quot;k&quot;&gt;fi
    
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;worktree_path&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/projects/worktrees/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$branch_name&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$repo_name&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
    
    &lt;span class=&quot;c&quot;&gt;# Create the directory structure if it doesn&apos;t exist&lt;/span&gt;
    &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/projects/worktrees/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$branch_name&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
    
    &lt;span class=&quot;c&quot;&gt;# Create the worktree with a new branch&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;if &lt;/span&gt;git worktree add &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$branch_name&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$worktree_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
        &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;✓ Worktree created at: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$worktree_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
        
        &lt;span class=&quot;c&quot;&gt;# Copy all .envrc files from the source repository&lt;/span&gt;
        &lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;source_repo&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;git rev-parse &lt;span class=&quot;nt&quot;&gt;--show-toplevel&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;envrc_files&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;find &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$source_repo&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-name&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;.envrc&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-not&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-path&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;*/&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\.&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;git/*&quot;&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
        
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$envrc_files&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
            &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$envrc_files&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; | &lt;span class=&quot;k&quot;&gt;while &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;read&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; envrc_file&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
                &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;rel_path&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;envrc_file&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;#&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$source_repo&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
                &lt;span class=&quot;nb&quot;&gt;local &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;target_dir&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$worktree_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;dirname&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$rel_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
                &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$target_dir&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
                &lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$envrc_file&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$worktree_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$rel_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
                &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;✓ Copied &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$rel_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;done
        fi
        
        &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$worktree_path&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;else
        &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;✗ Failed to create worktree&quot;&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;return &lt;/span&gt;1
    &lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This function, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wt-new&lt;/code&gt;, takes a branch name as an argument, creates the necessary directory structure, and sets up a new worktree for the current repository. It also copies any &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.envrc&lt;/code&gt; files from the source repository to the new worktree, so environment configurations are preserved.&lt;/p&gt;

&lt;h2 id=&quot;the-real-magic&quot;&gt;The Real Magic&lt;/h2&gt;

&lt;p&gt;Now, if you&apos;re paying attention, you might think this is a pretty nice way to create a per-task structure, but there are a few extra steps that &lt;em&gt;REALLY&lt;/em&gt; turn this whole thing into pure magic.&lt;/p&gt;

&lt;p&gt;With this directory layout in place, agentic tooling gets really interesting when you add a few conventions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Instructing the agent to create git commits as it works&lt;/li&gt;
  &lt;li&gt;Instructing the agent to push changes to the remote repository when it completes a step (while specifying git push options to automatically create a merge-request on the remote)&lt;/li&gt;
  &lt;li&gt;Instructing the agent to keep track of merge requests in a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MERGE_REQUESTS.md&lt;/code&gt; file in the parent directory&lt;/li&gt;
  &lt;li&gt;Giving the agent access to CI/CD pipelines so it can see output of tests, scans, and other jobs (through an MCP server, for example)&lt;/li&gt;
  &lt;li&gt;Instructing the agent to document instructions I provide in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;./AGENTS.md&lt;/code&gt; file, so that any refinements I give it during the chat session are captured (I find this helps to keep important instructions in the AI context)&lt;/li&gt;
  &lt;li&gt;Having a tool like pre-commit already running on the repositories – the agent will see the pre-commit failures and fix them as it goes&lt;/li&gt;
&lt;/ol&gt;
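&lt;p&gt;For step 2, if your remotes live on GitLab, the push looks something like this. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-o&lt;/code&gt; options are GitLab&apos;s documented merge-request push options; the remote name, target branch, and title below are assumptions for illustration:&lt;/p&gt;

```shell
# Push the task branch and have GitLab open a merge request for it.
# "origin" and the "main" target branch are example names.
git push origin scan-fix-trivy-issues \
  -o merge_request.create \
  -o merge_request.target=main \
  -o merge_request.title="Scan and fix Trivy issues"
```

&lt;p&gt;GitHub doesn&apos;t support an equivalent push option, so there you&apos;d have the agent call &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gh pr create&lt;/code&gt; after the push instead.&lt;/p&gt;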

&lt;h2 id=&quot;demonstrating-the-workflow&quot;&gt;Demonstrating the Workflow&lt;/h2&gt;

&lt;p&gt;To demonstrate the workflow, let&apos;s say I want to create a new worktree for the branch &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scan-fix-trivy-issues&lt;/code&gt; across a few repositories. I navigate to each repository and run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wt-new scan-fix-trivy-issues&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/projects/acme-corp/infra-team/terraform/account-provisioning
wt-new scan-fix-trivy-issues
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/projects/acme-corp/infra-team/terraform/networking
wt-new scan-fix-trivy-issues
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/projects/acme-corp/infra-team/terraform/networking-modules
wt-new scan-fix-trivy-issues
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, I create my &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AGENTS.md&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PLAN.md&lt;/code&gt; files in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;scan-fix-trivy-issues&lt;/code&gt; directory to provide context for the task.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;~/projects/worktrees/scan-fix-trivy-issues/
| - account-provisioning/
| - networking/
| - networking-modules/
| - AGENTS.md
| - PLAN.md
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;From there, I can just run my favorite AI tool (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;codex&lt;/code&gt;) in the parent directory and prompt it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Get started&lt;/code&gt;. It will read the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AGENTS.md&lt;/code&gt; file by default, which tells it to read the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PLAN.md&lt;/code&gt; file and follow the steps outlined there.&lt;/p&gt;

&lt;p&gt;At some key points, I&apos;ll instruct the agent to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push changes&lt;/code&gt;, which will cause it to create commits, push them to the remote repository, create merge requests automatically, and log them in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MERGE_REQUESTS.md&lt;/code&gt; file. I can tell it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Check the status of the merge request pipelines&lt;/code&gt; and it&apos;ll go off, look at the output of the CI/CD jobs, and get to work troubleshooting failures.&lt;/p&gt;

&lt;h2 id=&quot;early-results&quot;&gt;Early Results&lt;/h2&gt;

&lt;p&gt;I&apos;ve only been using this workflow for a couple of days, but it&apos;s been extremely promising. There have been a few key moments where the agent has (rightly) indicated that it might cause an outage with a change and I&apos;ve been able to pull in additional repositories (like our repositories that manage helm charts and environment-specific input values) to give it more context to make more &quot;informed decisions&quot;.&lt;/p&gt;

&lt;p&gt;I also very much like that I have given it a large sandbox where it can play safely. The worktrees are all created with a branch, so there&apos;s a low risk that I&apos;ll have the wrong branch checked out. I also don&apos;t have to worry about a pile of pending changes in my main working directories, or about them getting in the way of other work. In fact, I&apos;ve found myself setting up this context, setting the agent off to do work, and then switching my focus to something else in the main working directory. As someone who has been very hesitant to relinquish control to AI tools, this has been a really nice way to reap the benefits while still having some confidence that it&apos;s not going to make a huge mess.&lt;/p&gt;

&lt;p&gt;(A human wrote this blog post, but GPT-5.1 was used for editing assistance. Use of emdashes was a human choice. 😄)&lt;/p&gt;
</description>
        <pubDate>Sat, 10 Jan 2026 00:00:00 -0500</pubDate>
        <lastmod>Sat, 10 Jan 2026 00:00:00 -0500</lastmod>
        <link>https://www.ricky-dev.com/coding/2026/01/agentic-tooling-across-multiple-repositories/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/coding/2026/01/agentic-tooling-across-multiple-repositories/</guid>
        
        <category>ai - agentic</category>
        
        <category>git</category>
        
        <category>tooling</category>
        
        <category>worktrees</category>
        
        <category>shell-scripts</category>
        
        
        <category>coding</category>
        
      </item>
    
      <item>
        <title>Streamlining My Homelab Deployments with Bare Git Repositories and Git Hooks</title>
        <description>&lt;p&gt;I finally stopped fighting with my homelab deploy process and turned it into something that feels almost civilized.&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;The homelab: an old gaming desktop running Ubuntu, hosting a handful of services in Docker Compose. The goal: fast iteration without losing my mind or my work. It took a few generations to get here.&lt;/p&gt;

&lt;h2 id=&quot;the-first-era-scp-makefiles-and-madness&quot;&gt;The First Era: SCP, Makefiles, and Madness&lt;/h2&gt;

&lt;p&gt;I wrote compose files locally, committed to GitHub for backup, SCP&apos;d them over, SSH&apos;d in, and ran the deploy by hand. For quick troubleshooting, that workflow was trash. I inevitably edited files directly on the server in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vi&lt;/code&gt;/&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nano&lt;/code&gt; – &quot;occasionally&quot; at first, then &quot;very often.&quot;&lt;/p&gt;

&lt;p class=&quot;callout&quot;&gt;Cumbersome != sustainable.&lt;/p&gt;

&lt;p&gt;During this time I toyed with GitHub Actions deploy-on-push workflows and GitLab CI/CD pipelines, but the friction of setting up runners and secrets and debugging remote jobs made them more hassle than they were worth for a homelab box. I spent a number of hours chasing my tail trying to get those systems to work smoothly – this is something I do in my day job … and at scale – but I could never keep myself focused enough to jump through all the hoops in my spare time. I&apos;d much rather be playing video games, tbh.&lt;/p&gt;

&lt;h2 id=&quot;the-second-era-remote-vs-code-local-regret&quot;&gt;The Second Era: Remote VS Code, Local Regret&lt;/h2&gt;

&lt;p&gt;I gave up for a while and just used VS Code remote editing on the server along with SSH to deploy by hand. Yes, it had better ergonomics, but I kept forgetting to sync changes back to my local repo (or GitHub), so the risk of losing work was high. And it started to get in the way of using newer GenAI tools outside VS Code (hi, Codex), as those tools often didn&apos;t play well when run inside an SSH session and I was too lazy to really chase down the causes. (This is a homelab, after all, and the purpose is to learn new things, not fight old ones.)&lt;/p&gt;

&lt;p&gt;This is ugly. It&apos;s embarrassing. I hated it. It was easy. I loved it. I hated that I loved it. I&apos;m lucky it didn&apos;t blow up in my face.&lt;/p&gt;

&lt;h2 id=&quot;the-third-era-a-new-dawn-arises&quot;&gt;The Third Era: A New Dawn Arises&lt;/h2&gt;

&lt;p&gt;(Don&apos;t say I don&apos;t have a flair for the dramatic.)&lt;/p&gt;

&lt;p&gt;I don&apos;t know why it took me so long to consider getting back to basics and relying on Git over SSH. I suppose I have been trained to think of GitOps in a &quot;push to a central repository and trigger events from there&quot; mindset, but for a single homelab server, that comes with a heavy lift. My training for at-scale infrastructure has often blinded me to simpler solutions for smaller-scale problems. I&apos;ve forced myself to think in terms of Kubernetes, Terraform, and CI/CD pipelines when all I really needed was a way to push code and have it deploy. Even today, I&apos;m thinking about some of those command-line-driven pipeline systems that I could set up on the server, just to mentally smack my hand because it&apos;s more complex than it needs to be. (Let&apos;s face it, I&apos;ll still probably do that later, just for fun, but it&apos;ll be invoked through the git hook.)&lt;/p&gt;

&lt;p&gt;The simple solution is simple … push directly to the server with Git over SSH. And hooks! I can use a hook on the server&apos;s repo to do things! And, oh! I see the output of that hook script in my push response! As soon as that last part clicked, it was obvious.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;On the server, I created a bare repo (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/srv/git/my-stack.git&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;On my desktop, I set that bare repo as a remote for my working copy.&lt;/li&gt;
  &lt;li&gt;The bare repo has a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;post-receive&lt;/code&gt; hook that:
    &lt;ul&gt;
      &lt;li&gt;updates a checked-out working tree,&lt;/li&gt;
      &lt;li&gt;runs &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker compose up -d&lt;/code&gt; (or whatever deploy command I need),&lt;/li&gt;
      &lt;li&gt;mirrors the repo to an external remote (currently GitLab).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
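&lt;p&gt;For concreteness, the first two steps might look like this – a minimal sketch, with the paths from the post and each step wrapped in a small function:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of the one-time setup; paths mirror the post's examples.

create_bare_repo() {
  # On the server: create a bare repo to act as the push target,
  # e.g. /srv/git/my-stack.git
  git init --bare "$1"
}

add_server_remote() {
  # On the desktop: point a "server" remote at the bare repo
  # (normally an ssh:// URL; a local path works the same way).
  git -C "$1" remote add server "$2"
}
```

&lt;p&gt;With those in place, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git push server main&lt;/code&gt; from the working copy is the whole deploy trigger.&lt;/p&gt;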

&lt;p&gt;Here&apos;s the flow end-to-end:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-mermaid&quot;&gt;sequenceDiagram
  participant Dev as Dev machine
  participant Bare as Bare repo on homelab
  participant Hook as post-receive hook
  participant WorkTree as Checked-out work tree
  participant Stack as Docker Compose stack
  participant Mirror as External remote (e.g. GitLab)

  Dev-&amp;gt;&amp;gt;Bare: git push server main
  Bare--&amp;gt;&amp;gt;Dev: push output (including hook logs)

  Bare-&amp;gt;&amp;gt;Hook: trigger post-receive
  Hook-&amp;gt;&amp;gt;WorkTree: git checkout -f main
  Hook-&amp;gt;&amp;gt;Stack: docker compose up -d --build
  Hook-&amp;gt;&amp;gt;Mirror: git push mirror

  Stack--&amp;gt;&amp;gt;Dev: logs, status (via MCP / CLI)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Here&apos;s an example &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/srv/git/my-stack.git/hooks/post-receive&lt;/code&gt; hook (if you&apos;re trying this yourself, don&apos;t forget to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;chmod +x&lt;/code&gt; it):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Post-receive hook for the bare remote repo on three.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Deploys main into /home/ricky/docker-apps/my-app-stack and restarts the stack.&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;set&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-euo&lt;/span&gt; pipefail

&lt;span class=&quot;nv&quot;&gt;TARGET_REF&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;refs/heads/main&quot;&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Don&apos;t judge me for having my work tree in my home dir ... it&apos;s a homelab!&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;WORK_TREE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/home/ricky/docker-apps/my-app-stack&quot;&lt;/span&gt;

&lt;span class=&quot;nv&quot;&gt;HOOK_DIR&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;dirname&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$0&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;GIT_DIR&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;HOOK_DIR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;%/hooks&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;

update_work_tree&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
  git &lt;span class=&quot;nt&quot;&gt;--git-dir&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$GIT_DIR&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--work-tree&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$WORK_TREE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; checkout &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; main
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

restart_stack&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$WORK_TREE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  docker compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--build&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;while &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;read&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; oldrev newrev ref&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$ref&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_REF&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;||&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;continue
  &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Deploying &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$ref&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; -&amp;gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$newrev&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; to &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$WORK_TREE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  update_work_tree
  restart_stack
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p class=&quot;callout&quot;&gt;Push once, deploy, and back up. No more stray edits.&lt;/p&gt;

&lt;p&gt;The deploy script lives with the hook, so the server is the source of truth for how to run the stack. My local workflow stays normal Git: edit, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git commit&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git push server main&lt;/code&gt;, watch the hook do the rest. If I want history elsewhere, the hook&apos;s mirror keeps GitLab in sync without me thinking about it.&lt;/p&gt;
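&lt;p&gt;The mirror step itself can be a one-liner in the hook. A minimal sketch – the remote name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mirror&lt;/code&gt; is an assumption here, registered once in the bare repo:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of the hook's mirror step. The "mirror" remote name is an
# assumption, configured once in the bare repo beforehand with:
#   git remote add mirror URL

mirror_repo() {
  # Push every ref (branches and tags) from the bare repo to the mirror.
  git --git-dir="$1" push --mirror "$2"
}
```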

&lt;h3 id=&quot;its-a-kind-of-magic&quot;&gt;It&apos;s a kind of magic&lt;/h3&gt;

&lt;p&gt;All of this came about in the context of a new service I was creating for my homelab: an MCP server to expose read-only container details and logs. This service was a code-for-sport project. I&apos;ve created a couple of MCP servers already, but nothing at home that I would consider keeping around for a while. One way my development workflow was failing me was that all these new GenAI tools couldn&apos;t be directed to assert control in safe ways. Deploying this new MCP server was the first of my homelab services that I built using this new Git-over-SSH-with-hooks workflow … and it surprised me once I started to get it even half-working.&lt;/p&gt;

&lt;p&gt;Suddenly, Codex (my current client of choice) could see the code, understand the context of the deploy process, and suggest changes that I could quickly test by pushing to the server. I could even have Codex inspect logs and running containers through the MCP server. Its access was still limited – it could only use the read-only MCP server tools and then git push when I told it to, but the context this flow provided made the AI assistance much more effective.&lt;/p&gt;

&lt;p&gt;Even if it did misbehave somehow (and let&apos;s face it, GenAI is still just a slightly-competent-recent-college-graduate-level-intern able to regurgitate some fairly sophisticated code that it saw once in a StackOverflow post … wait, is StackOverflow still a thing?!), the deployment process was still git-based, so I could still roll back changes or fix things manually if needed.&lt;/p&gt;

&lt;h2 id=&quot;onward&quot;&gt;Onward&lt;/h2&gt;

&lt;p&gt;I&apos;ve now moved all of my Docker Compose stacks to use this deployment process. It&apos;s already encouraged me to pick up some issues that I&apos;d pushed off because they required a lot of deployments and the iteration cycle was too painful. I even gave Codex a set of really complex instructions to troubleshoot a specific issue, hit enter, and went to dinner with my wife … I haven&apos;t trusted a GenAI tool that much before and even called it out to her (and she scoffed, because she has a more-than-healthy skepticism of AI tools in general, and I like that about her).&lt;/p&gt;

&lt;p&gt;Honestly, seeing this whole setup work has rather changed my mind about GenAI. We still have to be careful about the kind of control we give these tools, but with the right context and guardrails, they can be a huge productivity boost.&lt;/p&gt;
</description>
        <pubDate>Fri, 19 Dec 2025 00:00:00 -0500</pubDate>
        <lastmod>Fri, 19 Dec 2025 00:00:00 -0500</lastmod>
        <link>https://www.ricky-dev.com/code/2025/12/streamlined-homelab-deployments/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2025/12/streamlined-homelab-deployments/</guid>
        
        <category>homelab</category>
        
        <category>docker</category>
        
        <category>git</category>
        
        <category>ai</category>
        
        
        <category>code</category>
        
      </item>
    
      <item>
        <title>Avoiding Common Pitfalls in Terraform Module Design</title>
        <description>&lt;p&gt;I&apos;ve been working with Terraform for over 6 years now, and during this time, I&apos;ve encountered several common mistakes when designing modules that can lead to frustrations, mismanagement, and even mistrust in the tool itself.&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;I continue to see teams complaining that Terraform is overly complex, hard to maintain, doesn&apos;t scale well, and – the most common complaint – that state files are unreliable and need constant manual intervention. That last point, especially, leads teams to fight against Terraform rather than working with it. In some cases, teams abandon the use of CICD processes altogether, opting instead to run Terraform commands manually, which only leads to more problems (not to mention a compliance and security nightmare).&lt;/p&gt;

&lt;p&gt;Before I go into more detail, I want to clarify that I will be using the terms &quot;&lt;em&gt;root module&lt;/em&gt;&quot; and &quot;&lt;em&gt;child module&lt;/em&gt;&quot; throughout this post. A root module is the top-level configuration in a Terraform project, while a child module is a reusable component that can be called from multiple root modules. A root module would define provider and backend configuration, while a child module would not.&lt;/p&gt;

&lt;h2 id=&quot;poor-module-naming&quot;&gt;Poor Module Naming&lt;/h2&gt;
&lt;p&gt;One of the most common mistakes I see is poor module naming (especially root modules). Modules should be named by their business purpose, not by the resource type they manage.&lt;/p&gt;

&lt;p&gt;For example, a module that manages an S3 bucket for storing user uploads should be named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;user-uploads-bucket&lt;/code&gt; rather than just &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3-bucket&lt;/code&gt;. This makes it clear what the module is for and helps avoid confusion when multiple modules manage similar resources. It also makes it easier to understand the overall architecture of the infrastructure at a glance when you see the root modules invoked for an environment.&lt;/p&gt;

&lt;p&gt;Even when creating a truly reusable child module, the name should reflect its purpose. For example, if I were creating a module that would encapsulate the default configuration for all S3 buckets in my organization that adhered to best practices and compliance requirements, I would name it something like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;compliant-s3-bucket&lt;/code&gt;. This makes it clear that the module is not just a generic S3 bucket but one that meets specific organizational standards. That module would then be invoked by a &quot;static-website-bucket&quot; module or a &quot;user-uploads-bucket&quot; module, etc.&lt;/p&gt;
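&lt;p&gt;As a sketch of that layering (module names, paths, and variables here are illustrative, not from a real registry):&lt;/p&gt;

```hcl
# Hypothetical root module "user-uploads-bucket" composing the
# organization-wide "compliant-s3-bucket" child module.
module "user_uploads" {
  source = "../modules/compliant-s3-bucket" # illustrative path

  # Business-relevant inputs; the child module derives the bucket name
  # and enforces encryption, versioning, and access policies internally.
  environment = "production"
  application = "uploads-service"
}
```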

&lt;h2 id=&quot;poor-module-scope&quot;&gt;Poor Module Scope&lt;/h2&gt;

&lt;p&gt;This goes hand-in-hand with poor module naming. Modules should have a clear and focused scope. A module that tries to do too much becomes difficult to understand, maintain, and reuse.&lt;/p&gt;

&lt;p&gt;I can&apos;t count how many times I&apos;ve seen an engineer add a variable to a module for some new functionality, after a dozen engineers have already done the same before them – each shovelling on more complexity and scope. The result is always the same: a module that nobody understands, nobody wants to use, and nobody wants to maintain. We only really notice these modules once they&apos;ve reached the point of being unmanageable, and by then, it&apos;s often easier to rewrite them from scratch than to try and untangle the mess.&lt;/p&gt;

&lt;p&gt;I think of modules a lot like microservices. Each module should have a single responsibility and do it well. If a module is responsible for managing a VPC, it shouldn&apos;t also be responsible for managing EC2 instances or RDS databases. Those should be separate modules that can be composed together as needed. If you find yourself adding more and more variables to a module, it&apos;s a sign that the module&apos;s scope is too broad and needs to be broken down into smaller, more focused modules.&lt;/p&gt;

&lt;p&gt;That may sound like I&apos;m advocating for incredibly small modules, and in some cases, I am. But the key is that each module should have a clear purpose and be easy to understand. A module that encapsulates a single cloud resource is likely too small, but a module that encapsulates a single business function (like &quot;user authentication&quot; or &quot;payment processing&quot;) is likely just right.&lt;/p&gt;

&lt;p&gt;Following this principle will also make naming the module easier – if you can name the purpose, you can define the scope and you can give it a good name.&lt;/p&gt;

&lt;h2 id=&quot;modules-are-too-flexible&quot;&gt;Modules are too flexible&lt;/h2&gt;

&lt;p&gt;I see a lot of recommendations and &quot;best practices&quot; that advocate for making modules as flexible as possible. They don&apos;t use those words directly, but they recommend that everything should be a variable, and that modules should be designed to handle a wide variety of use cases. And I see engineers taking this to the absolute extreme, creating modules with dozens of variables, many of which are complex objects or maps.&lt;/p&gt;

&lt;p&gt;A good example is the creation of an IAM role. I often see a child module that creates an AWS IAM role in a generic way, but then also has variables for the definition of the role&apos;s policies, trust relationships, tags, and even the ability to attach managed policies. The result is a module that is so flexible that it can be used for almost any purpose, but it&apos;s also incredibly complex and difficult to use. It&apos;s also pretty meaningless.&lt;/p&gt;

&lt;p&gt;In these cases, I typically create a module that creates a role with a specific purpose, such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;service-irsa-role&lt;/code&gt;. The module would have a few variables, such as the role name (though probably not the role name directly … I&apos;d probably have other business-relevant variables used to construct it, like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;environment&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;application&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;component&lt;/code&gt;), but I would avoid making the module too flexible by allowing arbitrary policies or managed policy attachments. And I would absolutely NOT allow the trust relationship to be defined as a variable. The trust relationship should be fixed based on the purpose of the module (in this case, IRSA). When the service using this role needs additional permissions, the &lt;em&gt;root module&lt;/em&gt; that invokes this child module can create and attach additional policies as needed. This keeps the child module focused and easy to use while still allowing flexibility in the root module – and the root module can be invoked in different environments to deploy the same application in dev, staging, and production.&lt;/p&gt;
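&lt;p&gt;A minimal sketch of that shape (names and variables are illustrative, and the IRSA trust policy document is elided):&lt;/p&gt;

```hcl
# Hypothetical "service-irsa-role" child module: a small, focused
# variable surface, with the trust relationship fixed by the module.
variable "environment" { type = string }
variable "application" { type = string }
variable "component"   { type = string }

locals {
  # The role name is constructed from business-relevant inputs,
  # not passed in directly.
  role_name = "${var.environment}-${var.application}-${var.component}"
}

resource "aws_iam_role" "this" {
  name = local.role_name
  # The trust policy is fixed for IRSA inside the module -- deliberately
  # NOT exposed as a variable. (Policy document data source elided.)
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}
```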

&lt;p&gt;Given this opinion, you may not be surprised to hear that I avoid using most modules from the Terraform Registry. I find that most of them are too generic and flexible, making them difficult to use and understand. I&apos;ll often use them as a reference to determine which resources should be included in my own module. I prefer to create my own modules that are focused on specific business purposes and have a clear scope. This also allows me to enforce organizational standards and best practices more easily.&lt;/p&gt;

&lt;h2 id=&quot;exploiting-terragrunts-hooks&quot;&gt;Exploiting Terragrunt&apos;s hooks&lt;/h2&gt;

&lt;p&gt;While this isn&apos;t exactly a Terraform mistake, I see a lot of teams using Terragrunt&apos;s hooks to run custom scripts or commands before or after Terraform commands. These teams often don&apos;t agree with the idea of using static and simple Terraform modules, and instead want to run custom logic to modify the state or configuration before applying changes. In my experience, they usually have a coding/scripting approach to Terraform, rather than a declarative &quot;configuration&quot; approach. They&apos;ll exploit Terragrunt&apos;s hooks to clean up state or modify configuration files.&lt;/p&gt;

&lt;p&gt;I see this as an exploitation. They&apos;re fighting against the nature of Terraform, trying to bend it to match their mindset without fully understanding how Terraform is designed to work. The worst part is that they won&apos;t see how great Terraform can be when used correctly because they&apos;re so focused on trying to make it work their way. These teams usually end up with brittle, hard-to-maintain infrastructure that requires constant manual intervention and breaks frequently – and they end up distrusting Terraform as a whole.&lt;/p&gt;

&lt;p&gt;Granted, I&apos;m not against Terragrunt hooks (or even Terragrunt&apos;s code generation features), but I think they should be used sparingly and only when absolutely necessary.&lt;/p&gt;

&lt;h2 id=&quot;death-by-a-thousand-cuts&quot;&gt;Death by a thousand cuts&lt;/h2&gt;

&lt;p&gt;Developer discipline is a big subject and very much applies to Terraform. Developers are constantly weighing the cost of doing something &quot;the right way&quot; versus doing it &quot;the easy way&quot; or &quot;the quick way&quot;.&lt;/p&gt;

&lt;p&gt;It&apos;s so easy to see a module called &quot;elasticache&quot; that is used to create Elasticache Redis clusters and modify it to also be able to create Memcached clusters. Or to add a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;list(object)&lt;/code&gt; variable to a VPC module to allow for the creation of additional subnets. Or even just a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;map(string)&lt;/code&gt; variable to allow for additional tags. Each of these changes may seem small and insignificant on its own, but over time, they add up and lead to a module that is so complex and difficult to use that nobody wants to use it anymore.&lt;/p&gt;
</description>
        <pubDate>Sat, 27 Sep 2025 00:00:00 -0400</pubDate>
        <lastmod>Sat, 27 Sep 2025 00:00:00 -0400</lastmod>
        <link>https://www.ricky-dev.com/coding/2025/09/terraform-pitfalls/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/coding/2025/09/terraform-pitfalls/</guid>
        
        
        <category>coding</category>
        
      </item>
    
      <item>
        <title>The Subtle Differences in Apply-Before-Merge and Apply-After-Merge</title>
        <description>&lt;p&gt;My infrastructure team, as part of a larger Terraform Restructure initiative, is moving our Terraform repositories to be apply-after-merge. There are significant changes in process between these two approaches; not all are obvious.&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;I won’t try to hide it: I like Apply-After-Merge more. I will attempt to show the strengths and problems of both processes, but you should know that I have already come to an opinion.&lt;/p&gt;

&lt;p&gt;Let’s start by looking at the two processes side-by-side, what humans need to do, and what the CICD pipeline will do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Apply-Before-Merge (ABM)&lt;/strong&gt; process is like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;em&gt;The submitter&lt;/em&gt; creates a Merge Request (MR)&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The CICD pipeline&lt;/em&gt;:
    &lt;ol&gt;
      &lt;li&gt;Creates a plan for the required changes and logs it for review (&lt;strong&gt;Plan Phase&lt;/strong&gt;)&lt;/li&gt;
      &lt;li&gt;Waits for manual intervention&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The reviewer&lt;/em&gt; looks at the code changes and the plan file&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The reviewer&lt;/em&gt; clicks the &quot;Apply&quot; button to trigger &lt;em&gt;the pipeline&lt;/em&gt; to continue&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The pipeline&lt;/em&gt; fetches the persisted plan file from step 2 and executes the plan (&lt;strong&gt;Apply Phase&lt;/strong&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The pipeline&lt;/em&gt; merges the MR into the default branch&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Apply-After-Merge (AAM)&lt;/strong&gt; process looks like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;em&gt;The submitter&lt;/em&gt; creates an MR&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The CICD pipeline&lt;/em&gt; creates a plan for the required changes and logs it for review&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The reviewer&lt;/em&gt; looks at the code changes and the plan file&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The reviewer&lt;/em&gt; clicks the &quot;Merge&quot; button and the code is merged into the default branch&lt;/li&gt;
  &lt;li&gt;&lt;em&gt;The pipeline&lt;/em&gt; – using the default branch – creates and executes a plan (&lt;strong&gt;Apply Phase&lt;/strong&gt;)&lt;/li&gt;
&lt;/ol&gt;
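&lt;p&gt;In CI terms, the AAM split comes down to two rules. A minimal sketch, assuming GitLab CI (job names are illustrative; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CI_*&lt;/code&gt; variables are GitLab&apos;s predefined ones):&lt;/p&gt;

```yaml
# Hypothetical .gitlab-ci.yml fragment: plan on merge requests,
# apply only from the default branch after the merge.
plan:
  script: terraform plan
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

apply:
  script: terraform apply -auto-approve
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```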

&lt;p&gt;Now keep in mind that CICD pipelines are async processes – there will be multiple, conflicting, merge requests in various stages of the process at any given time.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;h2 id=&quot;everybody-has-a-plan-until-they-get-punched-in-the-mouth--mike-tyson&quot;&gt;&quot;Everybody has a plan until they get punched in the mouth&quot; – Mike Tyson&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;In each approach, we need to create a plan file prior to MR review, but in ABM the plan file has another purpose: that plan file will be the exact changes made during the apply step. That means that the plan file needs to be stored somewhere so that it can be retrieved later (during the Apply Phase). It also means that a plan has the opportunity to &lt;em&gt;go stale&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The storage of a process-critical, but still ephemeral, bit of data can be tricky to get right. In our case, we’re using GitLab CI caching, but that comes with potential pitfalls: the cache entries need a key that’s scoped to the MR; are there race conditions that could cause the wrong plan file to occupy the cache? What happens if the cache entry is evicted? Too often, these questions go unasked.&lt;/p&gt;
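&lt;p&gt;One way to scope that key, as a sketch (assuming GitLab CI; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;CI_MERGE_REQUEST_IID&lt;/code&gt; is a predefined variable, the rest is illustrative):&lt;/p&gt;

```yaml
# Hypothetical cache config keyed per merge request, so concurrent MRs
# cannot overwrite each other's plan files.
plan:
  script: terraform plan -out=plan.tfplan
  cache:
    key: "tfplan-$CI_MERGE_REQUEST_IID"
    paths:
      - plan.tfplan
    policy: push
```

&lt;p&gt;Even then, caches are best-effort and can be evicted – which is exactly the point above. Pipeline artifacts are often a better fit for plan files than caches.&lt;/p&gt;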

&lt;blockquote&gt;
  &lt;h2 id=&quot;the-starting-point-for-all-achievement-is-desire--napoleon-hill&quot;&gt;&quot;The starting point for all achievement is desire.&quot; – Napoleon Hill&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;A much more subtle difference is in the question &lt;em&gt;where is the desired state&lt;/em&gt;? And also, &lt;em&gt;what does the default branch represent?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In Terraform, you are always dealing with two states: the desired state and the state of reality. When Terraform is creating a plan, the first thing it does is investigate the &lt;em&gt;state of reality&lt;/em&gt; (what resources exist in the target environment and how are they configured), compare that to the &lt;em&gt;desired state&lt;/em&gt; (the terraform code, in totality), and build a series of steps to change reality so that it matches the &lt;em&gt;desired state&lt;/em&gt;. The plan file is that set of steps.&lt;/p&gt;

&lt;p&gt;So, in ABM, where is the &lt;em&gt;desired state&lt;/em&gt; stored? It’s not in the default branch – you’ve already applied a new &lt;em&gt;desired state&lt;/em&gt; to the environment before merging the code. Does that mean it’s the unmerged branch? What happens if you have multiple unmerged branches? Do you have multiple &lt;em&gt;desired state&lt;/em&gt;s?&lt;/p&gt;

&lt;p&gt;This also leads you to think that the default branch represents reality when using ABM – but that’s incorrect. It’s common for resources created by Terraform to be modified outside of Terraform, and it’s commonplace for reality to morph in ways that are entirely appropriate. Even if your environment shouldn’t change outside of your Terraform code, there’s no guarantee that it hasn’t. Your default branch becomes a Schrödinger’s cat – until you run a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;plan&lt;/code&gt;, it both does and does not represent reality.&lt;/p&gt;

&lt;p&gt;In fact, in ABM, the default branch represents neither reality (reliably), nor the &lt;em&gt;desired state&lt;/em&gt;. If you squint and try real hard, you could say that the default branch represents &quot;the last applied and quickly and successfully merged &lt;em&gt;desired state&lt;/em&gt; assuming there are no other MRs in or after the Apply Stage that more accurately describe the &lt;em&gt;desired state&lt;/em&gt;.&quot; That’s a LOT of caveats.&lt;/p&gt;

&lt;p&gt;In AAM, the &lt;em&gt;desired state&lt;/em&gt; is a lot more clear: it’s the default branch. Full stop. The default branch is your &lt;em&gt;desired state&lt;/em&gt;, even if you’ve failed to (yet) reach the &lt;em&gt;desired state&lt;/em&gt; and reality doesn’t match it.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;h2 id=&quot;when-things-go-wrong-dont-go-with-them--elvis-presley&quot;&gt;&quot;When things go wrong, don&apos;t go with them.&quot; – Elvis Presley&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;h3 id=&quot;lets-consider-a-common-failure-state-your-terraform-apply-fails&quot;&gt;Let’s consider a common failure state: &lt;strong&gt;your terraform apply fails.&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;I’ve mentioned that in AAM the default branch represents the &lt;em&gt;desired state&lt;/em&gt; – and not necessarily the state of reality. So, a failure to reach the &lt;em&gt;desired state&lt;/em&gt; (executing your plan), results in an ongoing separation between the &lt;em&gt;desired state&lt;/em&gt; and reality. But, that’s pretty much it. At any point, you can re-apply the default branch to again attempt to reach your &lt;em&gt;desired state&lt;/em&gt;. In practice, this results in failed pipelines that are easy to remedy and so pretty much not a big deal.&lt;/p&gt;

&lt;h3 id=&quot;consider-another-scenario-the-git-merge-fails&quot;&gt;Consider another scenario: &lt;strong&gt;the git merge fails&lt;/strong&gt;.&lt;/h3&gt;

&lt;p&gt;The merge itself can fail for a few reasons, but the most common is that there are multiple merge requests in flight at a time, resulting in a slow-moving race condition: &lt;em&gt;MR1&lt;/em&gt; is merged while you&apos;re looking at &lt;em&gt;MR2&lt;/em&gt;. In ABM, this can be devastating. If MR1 and MR2 are both approved, and the applies are happening simultaneously, and MR1 is merged first, this can – and will – result in MR2 failing to merge because it requires a rebase. You are now in a state where MR2 &lt;em&gt;has been applied&lt;/em&gt;, but not merged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Any other&lt;/strong&gt; MR could now be applied and &lt;em&gt;REVERT&lt;/em&gt; the changes from MR2 that were already applied. Again, this can be devastating to an infrastructure. It&apos;s very easy for resources to be deleted because they&apos;re not defined in that MR&apos;s &lt;em&gt;desired state&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In AAM, any required rebases are found and addressed before the apply – no harm, no foul.&lt;/p&gt;
</description>
        <pubDate>Wed, 14 Jun 2023 00:00:00 -0400</pubDate>
        <lastmod>Wed, 14 Jun 2023 00:00:00 -0400</lastmod>
        <link>https://www.ricky-dev.com/code/2023/06/subtle-differences-between-merge-after-and-before/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2023/06/subtle-differences-between-merge-after-and-before/</guid>
        
        <category>terraform</category>
        
        <category>code</category>
        
        
        
      </item>
    
      <item>
        <title>Complex Data Structures Are An Anti-Pattern</title>
        <description>&lt;p&gt;In Terraform, I&apos;ve found that KISS &amp;gt; DRY and that the use of complex data structures as inputs is a gilded foot gun.&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;I&apos;ve now created a few large Terraform repositories that have managed sprawling infrastructure spanning multiple environments at once and I&apos;ve felt pain.&lt;/p&gt;

&lt;p class=&quot;callout&quot;&gt;Oh! How I&apos;ve hurt myself.&lt;/p&gt;

&lt;p&gt;I learned Terraform &lt;em&gt;as a software engineer&lt;/em&gt; and I applied my opinionated &quot;best practices&quot; to Terraform code – especially DRY. Terraform&apos;s primary means of DRYing your code is using Modules and I thought of Modules like do-everything blueprints. If I needed an AWS VPC, I&apos;d create a &quot;network&quot; module. As my &lt;em&gt;project&lt;/em&gt; needs changed, so would the module; &lt;strong&gt;it would be made to be more flexible&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That single network module would morph. Originally, it would only create a VPC and maybe a couple of subnets. Then public/private subnet pairs in 2 availability zones. Then, the big mistake would happen: I would make the module dynamic.&lt;/p&gt;

&lt;p&gt;My project would pass in subnet configuration as an input … something like:&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;subnets&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;us_east_1a_public&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.0.0/20&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;us_east_1a_private&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.16.0/20&quot;&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;us_east_1b_public&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1b&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.32.0/20&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;us_east_1b_private&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1b&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.48.0/20&quot;&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# You know there&apos;s more ... you&apos;re HA, right?&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then, you want to add network routing rules to just your private subnets, but you don&apos;t want to refactor a ton, so each entry in your inputs starts to look like …&lt;/p&gt;
&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;us_east_1a_public&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.0.0/20&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;routes&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;0.0.0.0/0&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nx&quot;&gt;aws_internet_gateway&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;igw&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;id&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;,&lt;/span&gt;
&lt;span class=&quot;nx&quot;&gt;us_east_1a_private&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;cidr&lt;/span&gt;              &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.16.0/20&quot;&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;routes&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;s2&quot;&gt;&quot;0.0.0.0/0&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nx&quot;&gt;modules&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;all_my_nat_gateways&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;nat_ids&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;us_east_1a&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You know that&apos;s ugly. It looks right but feels wrong. Eh, it works, right? It&apos;s fine.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;It wasn&apos;t fine.&lt;br /&gt;
-Narrator&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think most software developers go down this path. They don&apos;t want to repeat themselves and they&apos;re told that static values inside code files are wrong. We do everything we can to avoid typing the word &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;resource&lt;/code&gt; a second or third time, and we sure as hell don&apos;t want to see more than one resource block &lt;em&gt;of the same type&lt;/em&gt; … I mean, can you imagine?!&lt;/p&gt;

&lt;p&gt;You&apos;ve done this, too, dear reader. I&apos;m sure of it. Maybe you&apos;ve noticed why it hurts … maybe not.&lt;/p&gt;

&lt;p&gt;This is what you&apos;ve done:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;You&apos;ve added logic to a framework based on static files.&lt;/li&gt;
  &lt;li&gt;You&apos;ve made your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;resource&lt;/code&gt; block &lt;em&gt;complex&lt;/em&gt; – it has expressions and loops in it. Maybe you had to create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;locals&lt;/code&gt; to hide the complexity, but that made it worse.&lt;/li&gt;
  &lt;li&gt;You&apos;ve stopped defining resources &lt;em&gt;in Terraform code&lt;/em&gt; and started defining them &lt;em&gt;in your own language.&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;You now have an &lt;em&gt;undocumented&lt;/em&gt; interface.&lt;/li&gt;
  &lt;li&gt;You&apos;ve taken a declarative language and made it imperative.&lt;/li&gt;
  &lt;li&gt;You &lt;em&gt;think&lt;/em&gt; you&apos;ve made something beautiful.&lt;/li&gt;
&lt;/ul&gt;
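
&lt;p&gt;What do those expressions and loops look like in practice? Something like this – a sketch of the pattern, not any real module of mine:&lt;/p&gt;

```hcl
variable "vpc_id" {
  type = string
}

variable "subnets" {
  # "any" is the tell: the real shape lives only in the callers' heads
  type = any
}

resource "aws_subnet" "this" {
  for_each = var.subnets

  vpc_id            = var.vpc_id
  availability_zone = each.value.availability_zone
  cidr_block        = each.value.cidr
}

# Flatten each subnet's optional routes map into one collection so a single
# aws_route resource can for_each over it -- and the morphing begins
locals {
  subnet_routes = merge([
    for subnet_key, subnet in var.subnets : {
      for dest, target in lookup(subnet, "routes", {}) :
      "${subnet_key}:${dest}" => {
        subnet_key = subnet_key
        dest       = dest
        target     = target
      }
    }
  ]...)
}
```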

&lt;p class=&quot;callout&quot;&gt;You&apos;ve really stepped in it this time.&lt;/p&gt;

&lt;p&gt;Terraform is a framework for declaring your infrastructure using a simple, &lt;strong&gt;&lt;em&gt;static&lt;/em&gt;&lt;/strong&gt;, block-based interface. You&apos;ve just stolen all three of its selling points from it. You&apos;ve undone the very essence of Terraform. You monster.&lt;/p&gt;

&lt;p&gt;As your repo of related projects grows, so does your library of widely reused, overly generic modules. And so does the complexity of each of them as they morph and bend to fit scenarios they were never intended for.&lt;/p&gt;

&lt;p&gt;Here come my hot takes … my lessons learned. My cheat codes for avoiding that pain.&lt;/p&gt;

&lt;p class=&quot;callout&quot;&gt;More Terraform code is better than less.&lt;/p&gt;

&lt;p&gt;Terraform projects and modules should be inflexible, single-purposed, and &lt;em&gt;opinionated&lt;/em&gt; for each scenario. Input variables should be avoided unless their purpose is obvious.&lt;/p&gt;

&lt;p&gt;So what if you have each subnet defined in its own &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;resource&lt;/code&gt; block with the CIDR inline as a static value?&lt;/p&gt;

&lt;div class=&quot;language-hcl highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nx&quot;&gt;resource&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;aws_subnet&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us_east_1a_public&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;vpc_id&lt;/span&gt;            &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nx&quot;&gt;aws_vpc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;my_vpc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;id&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;cidr_block&lt;/span&gt;        &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.0.0/20&quot;&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;

  &lt;span class=&quot;nx&quot;&gt;tags&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;Name&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a Public&quot;&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;nx&quot;&gt;resource&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;aws_subnet&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us_east_1a_private&quot;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;vpc_id&lt;/span&gt;            &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nx&quot;&gt;aws_vpc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;my_vpc&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nx&quot;&gt;id&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;cidr_block&lt;/span&gt;        &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;10.0.16.0/20&quot;&lt;/span&gt;
  &lt;span class=&quot;nx&quot;&gt;availability_zone&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a&quot;&lt;/span&gt;

  &lt;span class=&quot;nx&quot;&gt;tags&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;nx&quot;&gt;Name&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;us-east-1a Private&quot;&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Nothing. That is readable. It&apos;s reasonable. It&apos;s easy to reason about. People unfamiliar with Terraform can understand it. It&apos;s documented. It&apos;s standardized. It&apos;s beautiful.&lt;/p&gt;
</description>
        <pubDate>Fri, 06 Jan 2023 00:00:00 -0500</pubDate>
        <lastmod>Fri, 06 Jan 2023 00:00:00 -0500</lastmod>
        <link>https://www.ricky-dev.com/code/2023/01/complex-data-structures-are-an-antipatern/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2023/01/complex-data-structures-are-an-antipatern/</guid>
        
        <category>terraform</category>
        
        <category>code</category>
        
        <category>rants</category>
        
        
        
      </item>
    
      <item>
        <title>Everything is on fire and it&apos;s great</title>
        <description>&lt;p&gt;I haven&apos;t blogged about it, but a couple of years ago I went through &lt;a href=&quot;https://www.splunk.com/en_us/newsroom/press-releases/2020/splunk-to-acquire-plumbr-and-rigor-expanding-the-worlds-most-comprehensive-observability-portfolio.html&quot;&gt;an acquisition&lt;/a&gt;. It was a pretty incredible experience and overall pretty positive. I&apos;m sure I&apos;ll blog about that at some point, but today I want to discuss a change that I experienced from going through that … a change &lt;em&gt;in me&lt;/em&gt;. A change I wish happened a lot sooner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I figured out how to not concern myself with most problems.&lt;/strong&gt;
&lt;!--more--&gt;&lt;/p&gt;

&lt;p&gt;… or, at least I think I did. &lt;em&gt;Something&lt;/em&gt; in me changed.&lt;/p&gt;

&lt;p&gt;At Rigor (a 30-something-person startup), I was the do-everything-tech-dude. I was hired to be a C# developer, but switched to Ruby, but switched to infrastructure. I also managed our Google Workspace, Slack Workspace, marketing email delivery, office networking … anything and everything. It didn&apos;t even stop at the border of the company, either. I ended up managing the Slack Workspace and helping with IT and networking for the startup village where we had our office.&lt;/p&gt;

&lt;p&gt;It seems that at some point I acquired an insatiable appetite for responsibility… or control? Or maybe I just saw problems I could solve all around me and there was nobody around to keep my focus on our product. Whatever the reason, it worked out really well … except …&lt;/p&gt;

&lt;p class=&quot;callout&quot;&gt;That shit was toxic.&lt;/p&gt;

&lt;p&gt;Over the four years at Rigor, I felt an insane amount of stress. Nothing that I had ever felt before. I&apos;m a rescuer by nature. I run into high-stress situations, but this was different. This wasn&apos;t a high-stakes infrastructure outage … this was a frog boil. I didn&apos;t know it, but I was living in the stew all the while figuring out how to better cut the carrots.&lt;/p&gt;

&lt;p&gt;I ended up having anxiety attacks. Once so bad I drove myself to the hospital because I thought I was having a heart attack. &lt;em&gt;Read that again, dear reader.&lt;/em&gt; I drove. Myself.&lt;/p&gt;

&lt;p&gt;Once acquired, something strange happened. I didn&apos;t have to worry anymore. We had money. We weren&apos;t going to go out of business. Our 30-something employees weren&apos;t going to go without their already-under-market salary. &lt;strong&gt;Suddenly, everything was fine&lt;/strong&gt;. I felt a weight lift from me that I could never fully express. Yeah, there was stress in the new company, but the hot water was comfortable relative to the inferno that I had convinced myself was normal.&lt;/p&gt;

&lt;p&gt;It&apos;s now been two years since the acquisition. I&apos;ve moved to a higher-level team that has a ton of responsibilities. And I feel nearly no stress. I&apos;m sitting here during our winter break (Splunk employees are off from Christmas to New Year… &lt;em&gt;amazing&lt;/em&gt;) and it struck me that I hadn&apos;t thought about work &lt;em&gt;at all&lt;/em&gt; in a week. I&apos;m not worried about what my co-workers are doing. I don&apos;t bother them with what I&apos;m doing. I can&apos;t concern myself with all of that.&lt;/p&gt;

&lt;p&gt;There have been &lt;a href=&quot;https://www.splunk.com/en_us/newsroom/press-releases/2021/splunk-announces-ceo-transition.html&quot;&gt;some&lt;/a&gt; &lt;a href=&quot;https://www.splunk.com/en_us/newsroom/press-releases/2022/splunk-announces-cfo-transition.html&quot;&gt;organizational&lt;/a&gt; shifts since then … and I see other people being worried about it … but I just can&apos;t be bothered to even think about it. Those kinds of changes would be earth-shattering to me at any other point in my career. They would signal the end of whatever organization I was in. A sure death knell.&lt;/p&gt;

&lt;p class=&quot;callout&quot;&gt;But, something changed &lt;em&gt;in me&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Everything is a mess and I&apos;m fine. There&apos;s messy business all around and I don&apos;t care. This is a dumpster fire and it&apos;s entirely normal. I&apos;ve said these zen-like phrases before. It finally sank in.&lt;/p&gt;

&lt;p&gt;Sure, my team is crazy understaffed (we should be 40 people and we&apos;re a team of 9) and other teams love to blame us for their own shortcomings … but this is great. We&apos;re not going anywhere. We&apos;re always improving. They treat me really well. Nothing else seems to matter to me anymore – I no longer &lt;em&gt;need&lt;/em&gt; the control I can&apos;t have. I don&apos;t have to prove anything to anybody (I mean it this time). I have little to fear.&lt;/p&gt;

&lt;p&gt;Now I&apos;m stuck figuring out why. Why am I so zen-like? Have I gained the confidence I always faked? Have I just figured out how to stay out of business that isn&apos;t my own? Or, am I getting complacent?&lt;/p&gt;
</description>
        <pubDate>Thu, 29 Dec 2022 00:00:00 -0500</pubDate>
        <lastmod>Thu, 29 Dec 2022 00:00:00 -0500</lastmod>
        <link>https://www.ricky-dev.com/rants/2022/12/everything-is-on-fire/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/rants/2022/12/everything-is-on-fire/</guid>
        
        <category>work life balance</category>
        
        <category>personal growth</category>
        
        
        <category>rants</category>
        
      </item>
    
      <item>
        <title>Monitoring your home internet connection because boredom</title>
        <description>&lt;p&gt;Early in the COVID-19 lockdown extravaganza 🎉 my wife and I purchased a new home and moved in. It was a stressful time, filled with uncertainty. In the aftermath, I found myself with pretty terrible internet service and since we are both working in isolation, I am hellbent on finding out why …&lt;/p&gt;

&lt;p&gt;So I&apos;ll use the tools I know – Telegraf, InfluxDB, Grafana, and Docker – to collect data, analyze it, and still probably not figure it out … but it&apos;s something to do.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.ricky-dev.com/images/cable-modem-snr.png&quot; alt=&quot;Cable modem signal-to-noise chart&quot; /&gt;&lt;/p&gt;

&lt;!--more--&gt;

&lt;p&gt;This post isn&apos;t meant to be a step-by-step guide so much as a mental dump. I aim to give you ideas and resources so that, if you want to replicate this kind of thing, you can do your own research and figure it out.&lt;/p&gt;

&lt;h2 id=&quot;overview&quot;&gt;Overview&lt;/h2&gt;
&lt;p&gt;My main goals were to measure the quality of my internet connectivity and provide a dashboard or two so I can see short-term and long-term changes and trends. I set out to do this in two main ways: perform regular tests and pull spectrum analysis data directly from my cable modem.&lt;/p&gt;

&lt;p&gt;Performing tests is really common and what you&apos;d expect: send pings and perform HTTP requests. Pulling data out of a cable modem is something I&apos;ve never seen before and I wasn&apos;t sure if it was even possible (it is).&lt;/p&gt;

&lt;p&gt;I ended up using a Raspberry Pi for data collection since it&apos;s always online (it&apos;s also my &lt;a href=&quot;https://pi-hole.net/&quot;&gt;pi-hole&lt;/a&gt;) and using my Windows PC for storage and reporting. I may move things to a 2nd Raspberry Pi, but since I wanted to store a ton of data, I opted to keep the database local until I can build a better storage system.&lt;/p&gt;

&lt;p&gt;Raspberry Pi:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;#telegraf&quot;&gt;Telegraf&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;A &lt;a href=&quot;#ruby-script&quot;&gt;Ruby Script&lt;/a&gt; to pull data from my modem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Windows 10 PC:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;#docker&quot;&gt;Docker&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#influxdb&quot;&gt;InfluxDB&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#grafana&quot;&gt;Grafana&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here&apos;s what it ended up looking like:
&lt;a href=&quot;https://www.ricky-dev.com/images/internet-monitor-1-large.png&quot;&gt;&lt;img src=&quot;/images/internet-monitor-1-medium.png&quot; alt=&quot;Internet uptime dashboard with a bunch of charts -- it&apos;s beautiful&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ricky-dev.com/images/cable-modem-dashboard-large.png&quot;&gt;&lt;img src=&quot;/images/cable-modem-dashboard-medium.png&quot; alt=&quot;Cable modem dashboard. It&apos;s like a rainbow.&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;docker&quot;&gt;Docker&lt;/h2&gt;

&lt;p&gt;I&apos;m not going to cover setting up Docker. It&apos;s been done and you can find some really great articles about how to go about it. I&apos;ll only tell you about my environment.&lt;/p&gt;

&lt;p&gt;I&apos;m hosting this on my Windows 10 PC (I am a gamer, after all) with Docker Desktop. However, I do most of my command line work inside of Ubuntu running on the &lt;a href=&quot;https://docs.microsoft.com/en-us/windows/wsl/&quot;&gt;Windows Subsystem for Linux&lt;/a&gt; (bonus shoutout to the &lt;a href=&quot;https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab&quot;&gt;Windows Terminal Preview&lt;/a&gt; for finally making this not awful).&lt;/p&gt;

&lt;h2 id=&quot;influxdb&quot;&gt;InfluxDB&lt;/h2&gt;

&lt;p&gt;InfluxDB is my main squeeze for timeseries data. We use it a &lt;em&gt;ton&lt;/em&gt; at Rigor for both customer facing data, as well as all of our internal operational metrics storage and reporting.&lt;/p&gt;

&lt;p&gt;I&apos;m running this inside of docker, with the following command:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;docker run &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; influxdb &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;INFLUXDB_ADMIN_PASSWORD=V&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$5gQ7&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;*dX^z^5EvT8v7N01IXbom*XJBx^f&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;INFLUXDB_ADMIN_USER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;admin &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--restart&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;always &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 8086:8086 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;E:&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\D&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;ocker&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\i&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;nfluxdb-data:/var/lib/influxdb&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  influxdb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This does the following:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Sets environment variables which will instruct the service to create a first user (no, that&apos;s not my real password)&lt;/li&gt;
  &lt;li&gt;Instructs Docker to start the container on boot and always restart it if it fails&lt;/li&gt;
  &lt;li&gt;Exposes the InfluxDB API port publicly (port 8086)&lt;/li&gt;
  &lt;li&gt;Mounts a folder as a volume so we persist the delicious data no matter what&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Important note: since I&apos;m mounting a Windows folder as a volume, I &lt;em&gt;must&lt;/em&gt; run this in a Windows shell and not inside Ubuntu.&lt;/p&gt;

&lt;p&gt;You&apos;ll likely also want to start a &lt;a href=&quot;https://hub.docker.com/_/chronograf&quot;&gt;Chronograf&lt;/a&gt; container temporarily, so you can manage your InfluxDB server and set it up however you&apos;d like. Perhaps you want a database &lt;em&gt;just&lt;/em&gt; for this data? Perhaps you want to create a write-only user for Telegraf? Chronograf will make it easier.&lt;/p&gt;

&lt;h2 id=&quot;telegraf&quot;&gt;Telegraf&lt;/h2&gt;

&lt;p&gt;The &apos;T&apos; in the TICK stack – Telegraf is a massively powerful data collection application. It&apos;s easy to overlook how wonderfully useful this application is because of how simplistic it appears. I&apos;ll configure it to perform all my tests and run my own script to pull data from the cable modem, batch it up, and ship it to influx on a fast schedule.&lt;/p&gt;

&lt;p&gt;Keep in mind that Telegraf only keeps data in memory. This has two important implications when it cannot send data to its outputs:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;it will continue to consume memory to hold data until it OOMs&lt;/li&gt;
  &lt;li&gt;it will lose any data it hasn&apos;t shipped if it crashes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These, to me, are Telegraf&apos;s only real drawbacks. They aren&apos;t deal breakers for most applications, but they&apos;re important to note since it&apos;s possible to drop data in some scenarios.&lt;/p&gt;

&lt;p&gt;Again, I&apos;m not going to detail how to install Telegraf – let Google be your guide there, but once you have it installed it should be as simple as editing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/telegraf/telegraf.conf&lt;/code&gt; file and restarting the telegraf service (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo service telegraf restart&lt;/code&gt; on Ubuntu).&lt;/p&gt;

&lt;p&gt;Here&apos;s a sample of my &lt;a href=&quot;https://www.ricky-dev.com/assets/2020-07-08-home-internet-monitoring/telegraf.conf&quot;&gt;telegraf config&lt;/a&gt; file. Notice that it has a single output – InfluxDB – and a number of inputs – ping, http_response, and my own Ruby script.&lt;/p&gt;
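
&lt;p&gt;For orientation, here&apos;s the rough shape of that config – a stripped-down sketch where the hostnames, test targets, and paths are placeholders, not my real values:&lt;/p&gt;

```toml
# One output: InfluxDB running on the Windows PC (placeholder hostname)
[[outputs.influxdb]]
  urls = ["http://my-windows-pc:8086"]
  database = "internet_monitor"

# Inputs: pings, HTTP checks, and my own script emitting influx line protocol
[[inputs.ping]]
  urls = ["8.8.8.8", "1.1.1.1"]

[[inputs.http_response]]
  urls = ["https://www.google.com/"]

[[inputs.exec]]
  commands = ["ruby /home/pi/modem_status.rb"]
  data_format = "influx"
```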

&lt;h2 id=&quot;ruby-script&quot;&gt;Ruby Script&lt;/h2&gt;

&lt;p&gt;Many cable modems have a web interface just like home routers – most people don&apos;t know about it, but there it is. Most of the time, you can reach it at http://192.168.100.1 and the login is either on a sticker on your cable modem, or it&apos;s something incredibly common. For my Netgear CM1200, it&apos;s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin&lt;/code&gt;/&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;password&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once you login, you should be able to get some pretty detailed measurements about your connection. This is an actual spectrum analysis that the cable modem performs and shows you a single snapshot.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://www.ricky-dev.com/images/cable-modem-interface-medium.png&quot; alt=&quot;Cable modem web interface&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I looked into using &quot;real&quot; monitoring protocols, like SNMP, to pull this data but my cable modem doesn&apos;t support it and it&apos;s unlikely yours will, either. Most consumer-level network devices don&apos;t expose these kinds of protocols – and why would they, anyway?&lt;/p&gt;

&lt;p&gt;So, I had to take a dirtier approach: web scraping. You hate it. I hate it. Let&apos;s get it over with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;➡ &lt;a href=&quot;https://www.ricky-dev.com/assets/2020-07-08-home-internet-monitoring/modem_status.rb&quot;&gt;Look at my fabulous Ruby script here!&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This script will connect to the web interface, grab the HTML for the page pictured above, and pull data out using regular expressions. (Un-)Luckily, the web interface doesn&apos;t generate HTML tables server-side. Instead, it writes all the data out to a JavaScript variable in a pipe-delimited format. This makes for easier regular expression patterns, and since the HTML output is very stable (at least until a firmware update changes the interface), I can just grab the exact line number holding the variable I want. Ugly, for sure, but #ShipIt 🚀.&lt;/p&gt;
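
&lt;p&gt;To make that concrete, here&apos;s a tiny sketch of the core idea. The field layout and values below are made up for illustration – the real script and your modem&apos;s actual output will differ:&lt;/p&gt;

```ruby
# Illustrative sketch only -- the modem page embeds its readings in a
# JavaScript variable as pipe-delimited fields. These field names are
# guesses for demonstration, not a documented format.
FIELDS = %w[channel lock_status modulation frequency_hz power_dbmv snr_db].freeze

def parse_channel(raw)
  FIELDS.zip(raw.split("|")).to_h
end

reading = parse_channel("1|Locked|QAM256|549000000|-1.2|38.9")
puts reading["snr_db"]  # prints 38.9
```

From there, each parsed reading can be formatted as influx line protocol and handed back to Telegraf.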

&lt;h2 id=&quot;grafana&quot;&gt;Grafana&lt;/h2&gt;

&lt;p&gt;Finally, the last step – creating a dashboard to dazzle your friends and blog readers.&lt;/p&gt;

&lt;p&gt;Yet again, I won&apos;t spend any real time on getting Grafana running. I just use Docker:&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;docker run --name grafana -p 3000:3000 grafana/grafana
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once Grafana was running, I configured an InfluxDB data source and created my dashboards.&lt;/p&gt;

&lt;p&gt;Here&apos;s the JSON export of my dashboards – you&apos;ll need to tweak it if you&apos;re trying to recreate this, for sure.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.ricky-dev.com/assets/2020-07-08-home-internet-monitoring/grafana-dashboard-cable-modem.json&quot;&gt;Cable Modem&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.ricky-dev.com/assets/2020-07-08-home-internet-monitoring/grafana-dashboard-internet-monitor.json&quot;&gt;Internet Monitor&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;what-did-i-learn&quot;&gt;What did I learn?&lt;/h2&gt;

&lt;p&gt;Once this whole thing was up and running, I immediately found that I was having pretty serious packet loss.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ricky-dev.com/images/packet-loss-chart.png&quot;&gt;&lt;img src=&quot;https://www.ricky-dev.com/images/packet-loss-chart.png&quot; alt=&quot;Packet loss chart&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also found that there are a number of frequencies on which my cable modem is unable to obtain solid locks and others that vary greatly through the day.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ricky-dev.com/images/cable-modem-power-large.png&quot;&gt;&lt;img src=&quot;https://www.ricky-dev.com/images/cable-modem-power-medium.png&quot; alt=&quot;Cable modem downstream power levels chart&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

</description>
        <pubDate>Wed, 08 Jul 2020 00:00:00 -0400</pubDate>
        <lastmod>Wed, 08 Jul 2020 00:00:00 -0400</lastmod>
        <link>https://www.ricky-dev.com/code/2020/07/home-connection-monitoring/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2020/07/home-connection-monitoring/</guid>
        
        <category>docker</category>
        
        <category>influxdb</category>
        
        <category>telegraf</category>
        
        <category>comcast</category>
        
        
        <category>code</category>
        
      </item>
    
      <item>
        <title>Setting us up the bomb</title>
        <description>&lt;p&gt;I&apos;ve always known that scripts were out there and scouring the internet for exploitable websites. It&apos;s why the more widely used blogging applications (I&apos;m looking at you, Wordpress) are so widely hacked.&lt;/p&gt;

&lt;p&gt;To make my own little positive impact on the world (and because it&apos;s fun), I&apos;ve decided to set up a few traps on my site.
&lt;!--more--&gt;&lt;/p&gt;

&lt;p&gt;I don&apos;t mind telling you about the traps, since it&apos;s unlikely that anybody trying to hack my site would actually read the blog. If they did, they&apos;d know that my site is completely static, hosted on AWS S3, and served by AWS CloudFront. I won&apos;t say that my site is unhackable, but I will say that it would be difficult and there would be little to gain. There&apos;s no database, no credentials, and no secrets. Also, recovering from a hacked site would take me less than a minute.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/bomb.png&quot; class=&quot;float-right&quot; alt=&quot;A random image that is of no importance to the blog post.&quot; /&gt;
So, on to the trap: &lt;strong&gt;the gzip bomb&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Gzip, a close cousin of the zip file you know and love, is very good at compressing repetitive data. What if I could take a bunch of data, compress it, and somehow convince script kiddies to decompress it, causing their script to slow down, fill their hard drive, and maybe even crash some things? It turns out this isn&apos;t terribly difficult.&lt;/p&gt;

&lt;p&gt;As part of optimizing your website, you ought to compress your text content using gzip and instruct your visitors&apos; browsers to decompress it before displaying it. This makes sites load much faster, since they&apos;re downloading less data, and it&apos;s not very difficult to do: most web servers handle it automatically or with minimal configuration. The browser downloads a gzipped body, and the HTTP response carries a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Content-Encoding&lt;/code&gt; header of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gzip&lt;/code&gt;. That header is the instruction to the browser to decompress the content.&lt;/p&gt;
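
&lt;p&gt;To make that round trip concrete, here&apos;s a tiny sketch you can run locally (the content is arbitrary): the server-side step and the browser-side step are just gzip in both directions.&lt;/p&gt;

```shell
# What the server does: compress the response body before sending it
# along with a "Content-Encoding: gzip" header.
# What the browser does on receipt: transparently decompress it.
echo 'hello world' | gzip | gzip -dc   # prints: hello world
```

&lt;p&gt;You can watch the real negotiation by running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl -sI --compressed&lt;/code&gt; against any gzip-enabled site and checking for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Content-Encoding&lt;/code&gt; response header.&lt;/p&gt;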

&lt;p&gt;So, to set up the bomb, I&apos;ll create a gzip file containing a bunch of garbage data (repeating zeros), name it something an exploit script will absolutely look for, and upload it to my hosting.&lt;/p&gt;

&lt;p&gt;After taking a look at my CloudFront usage report, I see that I get scanned often, and the most common thing people look for is the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/wp-login.php&lt;/code&gt; file. This is the file that processes WordPress admin logins, so it makes sense that it would be at the top of the list.&lt;/p&gt;

&lt;p&gt;So, with a handy flick of the wrist:&lt;/p&gt;
&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;dd &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/dev/zero &lt;span class=&quot;nv&quot;&gt;bs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1024 &lt;span class=&quot;nv&quot;&gt;count&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;10240000 | &lt;span class=&quot;nb&quot;&gt;gzip&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; wp-login.php
aws s3 &lt;span class=&quot;nb&quot;&gt;cp &lt;/span&gt;wp-login.php s3://my_bucket/ &lt;span class=&quot;nt&quot;&gt;--content-type&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;text/html&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--content-encoding&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;gzip&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;I have created a 10MB file, called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wp-login.php&lt;/code&gt;, that decompresses to &lt;strong&gt;10GB&lt;/strong&gt; of nothing, and it only takes about a minute to create. To be clear, it doesn&apos;t expand into a useful &lt;em&gt;file&lt;/em&gt;, nor into repeating zero characters; the content is binary zeros, i.e. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;null&lt;/code&gt; bytes. Trying to extract the data will result in simple nonsense, but the attempt will still be made, and that&apos;s the important part.&lt;/p&gt;
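
&lt;p&gt;If you&apos;d like to sanity-check the expansion ratio without risking your own disk, you can build a scaled-down bomb and stream-decompress it; counting the bytes with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wc -c&lt;/code&gt; means the expanded data is never written anywhere. (The file name here is just for illustration.)&lt;/p&gt;

```shell
# A miniature of the bomb: 4 KB of zeros, same recipe, smaller count.
dd if=/dev/zero bs=1024 count=4 2>/dev/null | gzip > mini-bomb.gz
# Stream the decompressed bytes straight into a byte counter;
# nothing is written to disk, so this is safe at any scale.
cat mini-bomb.gz | gzip -dc | wc -c   # prints 4096
```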

&lt;p&gt;It exists now, on my site. I won&apos;t link to it, because clicking the link could crash your browser (although Chrome seems to handle it pretty well, albeit slowly).&lt;/p&gt;

&lt;p&gt;Am I sure this will inflict pain upon the lower levels of the internet hacker community? No, but it costs me nothing to do and I am an optimist at heart.&lt;/p&gt;

&lt;h2 id=&quot;learning&quot;&gt;Learning&lt;/h2&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/HTTP_compression&quot;&gt;HTTP Compression&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding&quot;&gt;Content Encoding Header&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki//dev/zero&quot;&gt;/dev/zero&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.unix.com/man-page/linux/1/dd/&quot;&gt;man dd&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.unix.com/man-page/Linux/1/gzip/&quot;&gt;man gzip&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        <pubDate>Tue, 25 Jul 2017 00:00:00 -0400</pubDate>
        <lastmod>Tue, 25 Jul 2017 00:00:00 -0400</lastmod>
        <link>https://www.ricky-dev.com/code/2017/07/setting-up-the-bomb/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2017/07/setting-up-the-bomb/</guid>
        
        <category>Development</category>
        
        <category>Hacking</category>
        
        
        <category>code</category>
        
      </item>
    
      <item>
        <title>Automating Jekyll deployment to S3 using CircleCI</title>
        <description>&lt;p&gt;I&apos;ve been working a great deal with Jekyll lately. I&apos;ve used it for this blog for a while, but I hadn&apos;t spent much time automating the whole process. Unfortunately, when I set out to hook everything up, there didn&apos;t seem to be anybody out there doing exactly what I wanted, so it took me a little more time than I expected.&lt;/p&gt;

&lt;p&gt;I&apos;ll guide you through the configuration steps I&apos;ve taken to get my site automatically deploying to S3 (served by CloudFront) using CircleCI whenever I push to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;master&lt;/code&gt; branch.&lt;/p&gt;

&lt;!--more--&gt;

&lt;h2 id=&quot;overview-of-the-set-up&quot;&gt;Overview of the Set Up&lt;/h2&gt;
&lt;p&gt;When this is all configured, your Jekyll website will deploy automatically to an S3 bucket every time you push code to your repository&apos;s master branch.&lt;/p&gt;

&lt;p&gt;This guide assumes you already have your site hosted on S3 (and, optionally, CloudFront).&lt;/p&gt;

&lt;p&gt;We will be using the &lt;a href=&quot;https://github.com/laurilehmijoki/s3_website&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3_website&lt;/code&gt;&lt;/a&gt; gem to push code to S3.&lt;/p&gt;

&lt;p&gt;I find this a great setup because it not only properly sets caching headers on our S3 objects and gzips them, but also performs a CloudFront invalidation on each build. Not to mention it takes the grunt work out of manually deploying my website.&lt;/p&gt;

&lt;h2 id=&quot;set-up-circleci&quot;&gt;Set Up CircleCI&lt;/h2&gt;

&lt;p&gt;First, a few steps that shouldn&apos;t need much detail:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;Create an account on &lt;a href=&quot;http://circleci.com&quot;&gt;CircleCI&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://circleci.com/add-projects&quot;&gt;Add your project&lt;/a&gt;
  Point the new project to your website/blog git repository. &lt;em&gt;This will kick off an initial build that will fail.&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;Add AWS permissions
    &lt;ol&gt;
      &lt;li&gt;From the &lt;a href=&quot;https://circleci.com/dashboard&quot;&gt;dashboard&lt;/a&gt;, select the cog next to your project.&lt;/li&gt;
      &lt;li&gt;Go to &quot;AWS Permissions&quot; in the sidebar.&lt;/li&gt;
      &lt;li&gt;Enter AWS Access Keys that have &lt;a href=&quot;https://github.com/laurilehmijoki/s3_website/tree/master/additional-docs/setting-up-aws-credentials.md&quot;&gt;adequate permissions&lt;/a&gt;.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;configure-the-build&quot;&gt;Configure The Build&lt;/h2&gt;
&lt;p&gt;First, you&apos;ll want to add &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gem &apos;s3_website&apos;&lt;/code&gt; to your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Gemfile&lt;/code&gt; and run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle install&lt;/code&gt;. This Ruby gem provides an executable that knows how to deploy our Jekyll build to S3 and can even configure our S3 bucket and CloudFront distribution for proper hosting (mostly).&lt;/p&gt;

&lt;h3 id=&quot;circleyml&quot;&gt;Circle.yml&lt;/h3&gt;

&lt;p&gt;CircleCI uses a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;circle.yml&lt;/code&gt; file, placed in the root of your project, for configuration. Here is mine:&lt;/p&gt;

&lt;noscript&gt;&lt;pre&gt;machine:
  environment:
    JEKYLL_ENV: production
    RUBY_ENV: production
  ruby:
    version: 2.4.0
dependencies:
  pre:
    - gem install jekyll s3_website jekyll-sitemap jekyll-paginate jekyll-gist
  override:
    - bundle install: # note the colon here
        timeout: 240 # note the double indentation (four spaces) here
  cache_directories:
    - &amp;quot;/opt/circleci/.rvm/gems&amp;quot;
compile:
  override:
    - bundle exec jekyll b -d $CIRCLE_ARTIFACTS
test:
  override:
    - echo &amp;quot;Oh yeah!&amp;quot; # No way to test Jekyll. Just do something to satisfy CircleCI.
deployment:
  # Only deploy master
  production:
    branch: master
    commands:
      - s3_website push
&lt;/pre&gt;&lt;/noscript&gt;
&lt;script src=&quot;https://gist.github.com/DigitallyBorn/46733fcb941a40d50f81c0b8d2af9c3a.js?file=circle.yml&quot;&gt; &lt;/script&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle exec jekyll b -d $CIRCLE_ARTIFACTS&lt;/code&gt; - This makes our Jekyll output, usually saved to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;./_site&lt;/code&gt;, go to a special folder on the CircleCI container that is preserved after the build, which lets you inspect the output of each build through their web interface. This is mostly a luxury, but it can make debugging builds easier later.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;test&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;override&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;echo &quot;Oh yeah!&quot;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# No way to test Jekyll. Just do something to satisfy CircleCI.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Remember that initial build that failed? This will fix it. CircleCI is built on the premise that builds are tested: the initial build failed because no tests were executed, and a build without tests is considered a failure (generally, that&apos;s a great policy). As far as I know, Jekyll has no mechanism to test the site it generates, so this configuration option simply executes a bash command that outputs some text. Since the command exits successfully, CircleCI counts it as a passing test and moves on to the deployment step.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt; &lt;span class=&quot;na&quot;&gt;deployment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;production&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;branch&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;master&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;commands&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;s3_website push&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This block instructs CircleCI to invoke &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3_website push&lt;/code&gt; for any changes to the master branch.&lt;/p&gt;

&lt;h3 id=&quot;s3_websiteyml&quot;&gt;s3_website.yml&lt;/h3&gt;
&lt;p&gt;This configuration file will give &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3_website&lt;/code&gt; the information it needs to deploy to s3 and configure/manipulate your cloudfront distribution.&lt;/p&gt;
&lt;noscript&gt;&lt;pre&gt;s3_bucket: ricky-dev.com

# Below are examples of all the available configurations.
# See README for more detailed info on each of them.

site: &amp;lt;%= ENV[&amp;#39;CIRCLE_ARTIFACTS&amp;#39;] %&amp;gt;

# index_document: index.html
# error_document: error.html

# Set some good and long optimization values 
cache_control:
  &amp;quot;images/*&amp;quot;: &amp;#39;public, max-age=604800, s-max-age=86400&amp;#39;
  &amp;quot;css/*&amp;quot;: &amp;#39;public, max-age=604800, s-max-age=86400&amp;#39;
  &amp;quot;*&amp;quot;: &amp;#39;public, max-age=604800, s-max-age=86400&amp;#39;

gzip:
  - .html
  - .css
  - .md
  - .cs
  - .xml
  - .json
# gzip_zopfli: true

# See http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region for valid endpoints
# s3_endpoint: ap-northeast-1

# ignore_on_server: that_folder_of_stuff_i_dont_keep_locally

# Most of these shouldn&amp;#39;t be included in the build, but I wouldn&amp;#39;t want to be wrong and publish this
exclude_from_upload:
  - Gemfile
  - _config.yml
  - s3_website.yml
  - circle.yml

# s3_reduced_redundancy: true


cloudfront_distribution_id: YOUR_DISTRIBUTION

cloudfront_distribution_config:
  # default_cache_behavior:
  #   min_TTL: &amp;lt;%= 60 * 60 * 24 %&amp;gt;
  aliases:
    quantity: 1
    items:
      - ricky-dev.com
      - www.ricky-dev.com

# cloudfront_invalidate_root: true

cloudfront_wildcard_invalidation: true

# concurrency_level: 5

# redirects:
#   index.php: /
#   about.php: about.html
#   music-files/promo.mp4: http://www.youtube.com/watch?v=dQw4w9WgXcQ

# routing_rules:
#   - condition:
#       key_prefix_equals: blog/some_path
#     redirect:
#       host_name: blog.example.com
#       replace_key_prefix_with: some_new_path/
#       http_redirect_code: 301
&lt;/pre&gt;&lt;/noscript&gt;
&lt;script src=&quot;https://gist.github.com/DigitallyBorn/46733fcb941a40d50f81c0b8d2af9c3a.js?file=s3_website.yml&quot;&gt; &lt;/script&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloudfront_distribution_id&lt;/code&gt; - Be sure to set this value if you&apos;re using CloudFront; otherwise, comment it out.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;site: &amp;lt;%= ENV[&apos;CIRCLE_ARTIFACTS&apos;] %&amp;gt;&lt;/code&gt; - This tells &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3_website&lt;/code&gt; to look for the site in the CircleCI artifacts folder and not the default &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;_site&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gzip:&lt;/code&gt; - I&apos;ve included a few very common extensions for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;s3_website&lt;/code&gt; to gzip before uploading. Depending on your website content, you should customize this list.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;exclude_from_upload:&lt;/code&gt; - In the next section, I&apos;ll instruct Jekyll not to include our config files in the site build, so this block &lt;em&gt;should&lt;/em&gt; be redundant, but I left it in place to make sure nothing gets uploaded.&lt;/p&gt;

&lt;h3 id=&quot;_configyml&quot;&gt;_config.yml&lt;/h3&gt;
&lt;p&gt;One last step is to make sure that our brand new config files don&apos;t get deployed to our public hosting. While there&apos;s nothing special in these configs, there&apos;s no reason to publish them, and handing any information to would-be bad guys should generally be avoided.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt; Even if you decide to skip this step, you will still need to exclude the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vendor&lt;/code&gt; folder. Without that exclusion, the build will fail because of a pre-build step CircleCI performs.&lt;/p&gt;

&lt;p&gt;Just add/merge this block into your Jekyll config:&lt;/p&gt;
&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;exclude&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;vendor&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Gemfile&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Gemfile.lock&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;s3_website.yml&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;circle.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
</description>
        <pubDate>Mon, 30 Jan 2017 00:00:00 -0500</pubDate>
        <lastmod>Mon, 30 Jan 2017 00:00:00 -0500</lastmod>
        <link>https://www.ricky-dev.com/code/2017/01/automating-jekyll-deployment-to-s3-using-circleci/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2017/01/automating-jekyll-deployment-to-s3-using-circleci/</guid>
        
        <category>jekyll</category>
        
        <category>circleci</category>
        
        <category>devops</category>
        
        <category>aws</category>
        
        <category>s3</category>
        
        <category>cloudfront</category>
        
        
        <category>code</category>
        
      </item>
    
      <item>
        <title>Be careful what you ask for; a lesson in monitoring</title>
        <description>&lt;p&gt;This morning was like any other—wake up late, shower, get stuck in traffic—but with a twist. As soon as I hit traffic, which was really bad even by I-85-in-midtown-Atlanta standards, my smartwatch started to buzz every few seconds: &lt;a href=&quot;http://www.trioapp.co/&quot;&gt;Trio&apos;s&lt;/a&gt; servers were unreachable, according to our monitoring service. &lt;em&gt;It&apos;s going to be one of those mornings.&lt;/em&gt;
&lt;!--more--&gt;
When our monitoring service tells me something is wrong, I have a mental checklist, and I began working it (more-or-less in order).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Check &lt;a href=&quot;http://www.newrelic.com&quot;&gt;NewRelic&lt;/a&gt; for performance issues.&lt;/li&gt;
  &lt;li&gt;Check &lt;a href=&quot;http://www.heroku.com&quot;&gt;Heroku&apos;s&lt;/a&gt; status page for outages.&lt;/li&gt;
  &lt;li&gt;Check &lt;a href=&quot;http://aws.amazon.com&quot;&gt;AWS&apos;&lt;/a&gt; status page.&lt;/li&gt;
  &lt;li&gt;Submit support ticket to Heroku, if applicable.&lt;/li&gt;
  &lt;li&gt;Look into any stop-gaps I can put in place to lessen the impact.&lt;/li&gt;
  &lt;li&gt;Inform users of potential issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was drafting a user notice to pop up in-app when something hit me: NewRelic also has monitoring turned on, and those alarms weren&apos;t going off; &lt;em&gt;why not&lt;/em&gt;?&lt;/p&gt;

&lt;h3 id=&quot;eureka&quot;&gt;Eureka!&lt;/h3&gt;

&lt;p&gt;There is a fundamental difference in how NewRelic and Pingdom perform their monitoring—Pingdom follows redirects and tests the final destination, whereas NewRelic (by default) accepts a redirect &lt;em&gt;as a success&lt;/em&gt;. This little detail matters because, a while back, we decided to throw away our single-page teaser homepage and simply redirect users to our iTunes store page. So, ever since that change, our monitoring service hasn&apos;t actually been monitoring our website; we&apos;ve been &lt;strong&gt;monitoring the iTunes store&lt;/strong&gt;. Oops.&lt;/p&gt;
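
&lt;p&gt;The difference is easy to model: a monitor that treats any 3xx response as success is reporting on whatever the redirect points at, not on your server. Here&apos;s a toy classifier sketching the stricter behavior; in practice the status code would come from something like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl -s -o /dev/null -w &apos;%{http_code}&apos; $URL&lt;/code&gt; (the function name here is mine, not any monitoring product&apos;s).&lt;/p&gt;

```shell
# Classify an HTTP status code the way a stricter monitor would:
# a redirect is not proof that your own server is healthy.
check_status() {
  case "$1" in
    2*) echo "up" ;;
    3*) echo "redirect - you may be monitoring the wrong site" ;;
    *)  echo "down" ;;
  esac
}

check_status 301   # prints: redirect - you may be monitoring the wrong site
```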

&lt;p&gt;With a quick fix and a flick of the wrist, Pingdom is now fetching our &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;robots.txt&lt;/code&gt; file. This has two benefits: it actually verifies that our server is alive and accessible, and because it fetches a static file, there is very little overhead on the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; When you&apos;re setting up HTTP monitoring, create a static file and monitor that file&apos;s URL.&lt;/p&gt;
</description>
        <pubDate>Tue, 21 Jul 2015 00:00:00 -0400</pubDate>
        <lastmod>Tue, 21 Jul 2015 00:00:00 -0400</lastmod>
        <link>https://www.ricky-dev.com/code/2015/07/a-lesson-in-monitoring/</link>
        <guid isPermaLink="true">https://www.ricky-dev.com/code/2015/07/a-lesson-in-monitoring/</guid>
        
        <category>monitoring</category>
        
        <category>newrelic</category>
        
        <category>heroku</category>
        
        <category>aws</category>
        
        
        <category>code</category>
        
      </item>
    
  </channel>
</rss>
