
Automatic NixOS Upgrades with Forgejo Actions

Keep NixOS servers and desktops up-to-date automatically — CI updates flake.lock, hosts self-upgrade daily, and you review a diff before anything deploys.


The problem: update fatigue

Every NixOS machine you manage needs its flake inputs updated, its configuration rebuilt, and the new generation activated. Do it manually and you either fall behind on security patches or spend your weekends SSH-ing into servers. Script it naively and you ship untested updates straight to production.

The ideal workflow:

  1. CI updates flake.lock on a schedule and shows you exactly what changed.
  2. You review and merge a pull request with per-host package diffs.
  3. Hosts self-upgrade from the merged commit — no manual intervention, no surprises.

This guide builds exactly that with a Forgejo Actions workflow and a small NixOS module.

Architecture

The system has two independent timers, staggered three hours apart:

Component                 Runs at             What it does
Forgejo Actions workflow  02:00 UTC           Runs the diff script, updates flake.lock, opens a PR with per-host diffs
NixOS auto-upgrade timer  05:00 (host-local)  Fetches main, builds own configuration, runs nixos-rebuild switch
Interaction flow
mermaid
sequenceDiagram
    participant CI as Forgejo CI
    participant Repo as 🗂 Git Repo
    participant Host as NixOS Host
    participant You
    rect rgba(126, 186, 228, 0.08)
        Note over CI,Repo: Daily — 02:00 UTC
        CI->>CI: scripts/flake-update-diff.sh
        CI->>CI: Build all hosts before & after
        CI->>CI: nvd diff per host
        CI->>Repo: Open PR with diff report
    end
    rect rgba(126, 186, 228, 0.04)
        Note over Repo,You: Whenever ready (hours, days, …)
        You->>Repo: Review diffs & merge PR
    end
    rect rgba(126, 186, 228, 0.08)
        Note over Host,Repo: Daily — 05:00 host-local
        Host->>Repo: Fetch main
        Host->>Host: nix build (--no-update-lock-file)
        Host->>Host: nixos-rebuild switch
        Note over Host,Repo: No-op if main is unchanged
    end
Branch lifecycle
mermaid
gitGraph
    commit id: " "
    commit id: " "
    branch auto/flake-update
    checkout auto/flake-update
    commit id: "chore: update flake inputs"
    checkout main
    merge auto/flake-update id: "merge PR"
    commit id: "hosts rebuild" type: HIGHLIGHT
    commit id: " "
    commit id: " "

Hosts never modify flake.lock themselves. They always use --no-update-lock-file and build whatever version of nixpkgs (and other inputs) the CI committed to main. This keeps every machine on the same, reviewed set of inputs.

Step 1: The diff script

The core logic lives in a standalone shell script that can run both locally and in CI. It builds every NixOS host configuration before and after a flake update, then produces a per-host package diff via nvd.

Create scripts/flake-update-diff.sh in your NixOS flake repository:

bash
#!/usr/bin/env bash
#
# Build all NixOS host configurations before and after a flake update,
# then report per-host package diffs via nvd.
#
# Exit codes:
#   0 — at least one host has closure changes (diff report on stdout)
#   1 — unexpected error
#   2 — flake.lock did not change after update
#   3 — flake.lock changed but no host closures differ
#
# Options:
#   --skip-update   Skip 'nix flake update' (useful when flake.lock is
#                   already updated and you just want to diff)
#
set -euo pipefail

SKIP_UPDATE=false
for arg in "$@"; do
  case "$arg" in
    --skip-update) SKIP_UPDATE=true ;;
    *) echo "Unknown option: $arg" >&2; exit 1 ;;
  esac
done

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
log()   { echo "  $*" >&2; }
ok()    { echo "  ✓ $*" >&2; }
fail()  { echo "  ✗ $*" >&2; }
# In CI: use ::group::/::endgroup:: for collapsible sections.
# Locally: print readable section headers instead.
if [ -n "${CI:-}" ]; then
  step()     { echo "::group::$1" >&2; }
  endstep()  { echo "::endgroup::" >&2; }
else
  step()     { echo >&2; echo "── $1 ──" >&2; }
  endstep()  { :; }
fi

# ---------------------------------------------------------------------------
# 1. Discover full (non-minimal) host configurations
# ---------------------------------------------------------------------------
step "Discovering hosts"
HOSTS=$(nix eval .#nixosConfigurations \
  --apply 'cs: builtins.filter (n: builtins.match ".*-minimal" n == null) (builtins.attrNames cs)' \
  --json | nix run nixpkgs#jq -- -r '.[]')

if [ -z "$HOSTS" ]; then
  fail "No host configurations found."
  exit 1
fi
HOST_COUNT=$(echo "$HOSTS" | wc -w | tr -d ' ')
log "Found $HOST_COUNT host(s): $(echo "$HOSTS" | tr '\n' ' ')"
endstep

# ---------------------------------------------------------------------------
# 2. Build current (before-update) configurations
# ---------------------------------------------------------------------------
for host in $HOSTS; do
  step "Building current $host"
  nix build ".#nixosConfigurations.$host.config.system.build.toplevel" \
    -o "result-before-$host" || true
  endstep
done

# ---------------------------------------------------------------------------
# 3. Update flake inputs
# ---------------------------------------------------------------------------
if [ "$SKIP_UPDATE" = false ]; then
  step "Updating flake inputs"
  nix flake update
  endstep
fi

# ---------------------------------------------------------------------------
# 4. Check whether flake.lock actually changed
# ---------------------------------------------------------------------------
step "Checking for flake.lock changes"
if git diff --quiet flake.lock 2>/dev/null; then
  ok "No changes — nothing to do."
  endstep
  exit 2
fi
ok "flake.lock has changed, rebuilding hosts."
endstep

# ---------------------------------------------------------------------------
# 5. Build updated configurations and generate per-host diffs
# ---------------------------------------------------------------------------
DIFF_REPORT=""
HAS_CHANGES=false
CHANGED_HOSTS=""
UNCHANGED_HOSTS=""

for host in $HOSTS; do
  step "Building updated $host"
  nix build ".#nixosConfigurations.$host.config.system.build.toplevel" \
    -o "result-after-$host" || true
  endstep

  if [ -e "result-before-$host" ] && [ -e "result-after-$host" ]; then
    step "Package diff for $host"
    HOST_DIFF=$(nix run nixpkgs#nvd -- diff "result-before-$host" "result-after-$host" 2>&1 || true)
    echo "$HOST_DIFF" >&2
    endstep

    if [ "$(readlink "result-before-$host")" != "$(readlink "result-after-$host")" ]; then
      HAS_CHANGES=true
      CHANGED_HOSTS="$CHANGED_HOSTS $host"
      DIFF_REPORT="${DIFF_REPORT}### ${host}"$'\n'"\`\`\`"$'\n'"${HOST_DIFF}"$'\n'"\`\`\`"$'\n\n'
    else
      UNCHANGED_HOSTS="$UNCHANGED_HOSTS $host"
    fi
  fi
done

# ---------------------------------------------------------------------------
# 6. Report results
# ---------------------------------------------------------------------------
step "Summary"
if [ -n "$CHANGED_HOSTS" ]; then
  ok "Changed:  $CHANGED_HOSTS"
fi
if [ -n "$UNCHANGED_HOSTS" ]; then
  log "Unchanged:$UNCHANGED_HOSTS"
fi

if [ "$HAS_CHANGES" = false ]; then
  fail "flake.lock changed but no host closures differ."
  endstep
  exit 3
fi

ok "Done — diff report ready."
endstep

# Print the diff report to stdout for consumers (CI, local review, etc.)
printf '%s' "$DIFF_REPORT"

How the script works

  1. Discover hosts — evaluates the flake to list all nixosConfigurations, filtering out any ending in -minimal (installer images, etc.).
  2. Build before — every host configuration is built from the current flake.lock and stored as a symlink result-before-<host>.
  3. Update — nix flake update pulls the latest nixpkgs, home-manager, and any other inputs (skipped with --skip-update).
  4. Check — if flake.lock is unchanged, the script exits early with code 2.
  5. Build after & diff — hosts are rebuilt with the updated lock file. For each host, nvd compares the before/after store paths and reports added, removed, and version-changed packages. The script distinguishes hosts whose closures actually changed from those that are identical despite the lock file update.
  6. Report — progress and a summary go to stderr; the machine-readable diff report goes to stdout for consumers (the CI workflow, a local terminal, etc.). If flake.lock changed but no host closures differ, the script exits with code 3.
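The `-minimal` filter from step 1 is easy to illustrate in plain shell. This is only a sketch of the filtering logic — the script does the same thing with builtins.filter inside the Nix evaluation, and the host names below are made up:

```shell
# Sketch of the host filter from step 1: keep every configuration name
# except those ending in "-minimal". The sample names are illustrative.
hosts="webserver devbox installer-minimal"
filtered=""
for h in $hosts; do
  case "$h" in
    *-minimal) ;;                              # skip installer/minimal variants
    *) filtered="${filtered:+$filtered }$h" ;;
  esac
done
echo "$filtered"   # webserver devbox
```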

Exit codes

Code  Meaning                                          CI action
0     At least one host has closure changes            Open PR with diff report
1     Unexpected error                                 Fail the job
2     flake.lock did not change — nothing to update    Skip PR, job succeeds
3     flake.lock changed but no host closures differ   Skip PR, job succeeds
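A consumer of these exit codes could look like the hypothetical helper below. The CI workflow in step 2 does the same mapping inline with a case statement:

```shell
# Hypothetical helper: map the diff script's exit code to a CI decision.
decide() {
  case "$1" in
    0) echo "open-pr" ;;
    2) echo "skip: flake.lock unchanged" ;;
    3) echo "skip: no closure changes" ;;
    *) echo "fail: unexpected error" >&2; return 1 ;;
  esac
}

decide 0   # open-pr
decide 3   # skip: no closure changes
```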

Running locally

Because the script is independent of CI, you can run it on your workstation to preview what an update would change before committing anything:

terminal
# Full run — update flake.lock and diff all hosts
$ ./scripts/flake-update-diff.sh

# Diff only — you already ran nix flake update manually
$ ./scripts/flake-update-diff.sh --skip-update

Progress is printed to stderr, so the diff report on stdout can be piped or redirected:

terminal
$ ./scripts/flake-update-diff.sh > /tmp/diff-report.md
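The stdout/stderr split can be demonstrated with a small stand-in — the mimic function below only imitates the script's output behavior, it is not the script itself:

```shell
# Stand-in for the script: progress goes to stderr, the report to stdout.
mimic() {
  echo "── Building hosts ──" >&2   # progress → stderr
  echo "### webserver"              # report   → stdout
}

report=$(mimic 2>/dev/null)         # capture only stdout
echo "$report"                      # ### webserver
```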

Step 2: The CI workflow

The workflow is a thin wrapper around the script. It handles SSH setup, git identity, and opening a pull request — all build and diff logic lives in the script above.

Create .forgejo/workflows/update.yaml:

yaml
name: Update Flake Inputs

on:
  schedule:
    - cron: "0 2 * * *" # Daily at 02:00 UTC — well before hosts auto-upgrade.
  workflow_dispatch: # Allow manual trigger from the Forgejo UI.

env:
  GIT_USER_NAME: ci # ⚠️ Replace with your CI bot name.
  GIT_USER_EMAIL: ci@example.com # ⚠️ Replace with your CI bot email.

jobs:
  update:
    name: Update flake.lock and diff all hosts
    runs-on: nixos-builder # A native NixOS runner — leverages the host Nix store.
    steps:
      - name: Checkout repository
        uses: https://data.forgejo.org/actions/checkout@v6

      # Write the deploy key so git can push branches and sign commits.
      - name: Configure SSH key for git push and commit signing
        run: |
          SSH_DIR="$RUNNER_TEMP/.ssh"
          mkdir -p "$SSH_DIR"
          echo "${{ secrets.GIT_PRIVATE_KEY }}" > "$SSH_DIR/forgejo_key"
          chmod 600 "$SSH_DIR/forgejo_key"
          SSH_BIN="$(command -v ssh)"
          export GIT_SSH_COMMAND="$SSH_BIN -i $SSH_DIR/forgejo_key -o StrictHostKeyChecking=no"
          echo "GIT_SSH_COMMAND=$GIT_SSH_COMMAND" >> "$FORGEJO_ENV"
          echo "SSH_KEY_PATH=$SSH_DIR/forgejo_key" >> "$FORGEJO_ENV"
          FORGEJO_DOMAIN="${FORGEJO_SERVER_URL#https://}"
          FORGEJO_DOMAIN="${FORGEJO_DOMAIN#http://}"
          echo "FORGEJO_DOMAIN=$FORGEJO_DOMAIN" >> "$FORGEJO_ENV"

      # Configure the CI bot identity and SSH commit signing.
      - name: Configure git identity, signing, and SSH remote
        run: |
          git config user.name "${{ env.GIT_USER_NAME }}"
          git config user.email "${{ env.GIT_USER_EMAIL }}"
          git config gpg.format ssh
          git config user.signingkey "$SSH_KEY_PATH"
          git config commit.gpgsign true
          git remote set-url origin git@${{ env.FORGEJO_DOMAIN }}:${{ forgejo.repository }}.git

      # Run the diff script. Exit codes 2 and 3 are expected
      # "no-op" conditions — only code 0 means a PR is needed.
      - name: Update flake inputs and diff all hosts
        id: diff
        run: |
          DIFF_REPORT=$(bash ./scripts/flake-update-diff.sh) && STATUS=0 || STATUS=$?

          case "$STATUS" in
            0)
              echo "has_changes=true" >> "$FORGEJO_OUTPUT"
              {
                echo "report<<DIFF_EOF"
                printf '%s\n' "$DIFF_REPORT"
                echo "DIFF_EOF"
              } >> "$FORGEJO_OUTPUT"
              ;;
            2) echo "No flake.lock changes, skipping PR." ;;
            3) echo "No host closure changes, skipping PR." ;;
            *) exit "$STATUS" ;;
          esac

      # Commit the updated flake.lock and open a PR with the diff report.
      - name: Create branch, commit, and open pull request
        if: steps.diff.outputs.has_changes == 'true'
        env:
          DIFF_REPORT: ${{ steps.diff.outputs.report }}
          FORGEJO_TOKEN: ${{ forgejo.token }}
        run: |
          DATE=$(date +%Y-%m-%d)
          BRANCH="auto/flake-update-$DATE"

          git push origin --delete "$BRANCH" || true
          git checkout -b "$BRANCH"
          git add flake.lock
          git commit -m "chore: update flake inputs $DATE"
          git push origin "$BRANCH"

          # jq --arg safely handles all escaping (backticks, newlines, quotes)
          PAYLOAD=$(nix run nixpkgs#jq -- -n \
            --arg title "chore: update flake inputs $DATE" \
            --arg head  "$BRANCH" \
            --arg base  "main" \
            --arg diff  "$DIFF_REPORT" \
            '{
              title: $title, head: $head, base: $base,
              body: "## Automated flake.lock update\n\nPackage changes per host (via nvd):\n\n\($diff)\n---\n*Auto-generated by the update workflow.*"
            }')

          nix run nixpkgs#curl -- -sf -X POST \
            -H "Authorization: token $FORGEJO_TOKEN" \
            -H "Content-Type: application/json" \
            "${{ forgejo.server_url }}/api/v1/repos/${{ forgejo.repository }}/pulls" \
            -d "$PAYLOAD"

      - name: Cleanup
        if: always()
        run: |
          rm -f "$SSH_KEY_PATH"
          rm -f result-before-* result-after-*

Compared to inlining all the build and diff logic in the workflow, this split has two advantages: the script can be run locally to preview updates before committing, and the workflow YAML stays focused on CI plumbing (SSH, git, PR creation) rather than build logic.
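The jq --arg trick from the PR step can be checked in isolation. The strings below are sample input only — the point is that jq escapes backticks, quotes, and newlines into valid JSON without any manual effort:

```shell
# jq --arg turns an arbitrary string (backticks, quotes, newlines)
# into a correctly escaped JSON value. DIFF is a made-up sample.
DIFF='### webserver
"quoted" and `backticked`'
PAYLOAD=$(jq -n --arg diff "$DIFF" '{body: $diff}')
echo "$PAYLOAD"
```

Round-tripping the value back out with `jq -r .body` returns the original string unchanged.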

Where it runs

The workflow runs on a NixOS host (runner label nixos-builder), not inside a container. This is important for two reasons:

  • It needs a working Nix installation with direct access to /nix/store. The host’s Nix store acts as a persistent build cache — derivations that haven’t changed since the last run are already in the store and don’t need to be rebuilt or downloaded again.
  • It builds full NixOS system configurations (system.build.toplevel), which are large closures that benefit greatly from an existing store.

You can run this workflow in a container (e.g. a Docker-based Forgejo runner with Nix installed), but each run would start with a cold Nix store. That means every derivation is fetched or built from scratch, turning a job that takes minutes on a NixOS host into one that can take significantly longer. If you go the container route, mounting a persistent volume for /nix/store and /nix/var/nix/db helps, but a native NixOS runner remains the most efficient option.

If you already run a Forgejo runner on a NixOS machine, point this workflow at it. Otherwise, register a new runner on any NixOS host with the label nixos-builder.

Secrets and configuration

The workflow needs one secret. Most configuration is derived automatically from Forgejo context variables.

Required secrets

GIT_PRIVATE_KEY — SSH private key used for two things: pushing the update branch (git push) and signing the commit (gpg.format ssh). The key must have write access to the repository. Store it in Forgejo → Repository Settings → Secrets.
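One way to generate such a key pair — the file path and comment below are placeholders; store the private half as the GIT_PRIVATE_KEY secret and register the public half as a deploy key with write access:

```shell
# Generate an ed25519 key pair for the CI bot. Path and comment are
# placeholders — adapt them to your setup.
KEY_DIR=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -C "ci@example.com" -f "$KEY_DIR/forgejo_ci_key" >/dev/null
cat "$KEY_DIR/forgejo_ci_key.pub"   # register this half as the deploy key
```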

The FORGEJO_TOKEN used to create the pull request via the API is provided automatically by Forgejo (${{ forgejo.token }}). No manual setup is required.

Values to customize

Only two values are hardcoded in the YAML — everything else (remote URL, API endpoint) is derived from Forgejo context variables (forgejo.server_url, forgejo.repository):

Value         Where in the YAML                       What to set
Git identity  env.GIT_USER_NAME / env.GIT_USER_EMAIL  The name and email for CI commits
Runner label  runs-on: nixos-builder                  Must match your registered NixOS runner

Optional / tunable

Field              Default                   Notes
cron: schedule     0 2 * * * (02:00 UTC)     Adjust to any cron expression that suits your timezone or review habits
workflow_dispatch  enabled                   Allows manual triggering from the Forgejo UI — remove the key if not wanted
Host filter        excludes *-minimal hosts  The builtins.filter in the script skips hosts matching .*-minimal — adjust the regex in scripts/flake-update-diff.sh to match your naming convention
PR base branch     main                      Change if your default branch has a different name

What the PR looks like

The resulting pull request body contains a per-host section like this:

text
## Automated flake.lock update

Package changes per host (via nvd):

### webserver

  [nvd output showing package upgrades, additions, and removals]

### devbox

  [nvd output for this host]

Hosts whose closures did not change (despite the flake.lock update) are omitted from the report. You review the PR, see exactly which packages changed on which host, and merge when ready. Nothing reaches your machines until you merge to main.

Tip: fully hands-off with auto-merge. If you trust the CI builds and don’t want to review every update manually, most Git forges (Forgejo, GitHub, GitLab) support auto-merging PRs once all required checks pass. Enable branch protection with a required status check for the build job, then configure auto-merge on the PR. The PR will merge itself as soon as CI is green — turning the entire pipeline into a zero-touch flow where hosts upgrade daily without any human interaction. You can still review the merged diff after the fact and roll back if needed.

Step 3: The auto-upgrade NixOS module

Create a module that wraps system.autoUpgrade so hosts pull from your Git repository on a schedule:

nix
# modules/auto-upgrade.nix
{
  config,
  lib,
  ...
}:
let
  hostname = config.networking.hostName;
in
{
  options.services.auto-upgrade.enable =
    lib.mkEnableOption "automatic daily flake-based NixOS upgrade";

  config = lib.mkIf config.services.auto-upgrade.enable {
    system.autoUpgrade = {
      enable = true;

      # Fetch the latest main branch from your Git server.
      # Each host selects its own nixosConfiguration by hostname.
      flake = "git+https://your-forgejo/nix/nixos.git#${hostname}";

      # Never update flake.lock on the host — CI handles that.
      flags = [
        "--no-update-lock-file"
      ];

      # Run daily at 05:00 (host-local time), three hours after CI.
      dates = "05:00";

      # Do not reboot automatically.
      # Most services restart on nixos-rebuild switch.
      allowReboot = false;

      # "switch" activates immediately.
      # Use "boot" to defer activation until next reboot.
      operation = "switch";
    };
  };
}

Key design decisions:

  • --no-update-lock-file — the most important flag. Hosts consume whatever flake.lock is on main. They never resolve inputs independently, so every machine converges on the same package set.
  • flake = "git+https://...#${hostname}" — each host selects its own nixosConfiguration output by hostname. One repository, many machines.
  • dates = "05:00" — staggered three hours after the CI runs at 02:00 UTC. This gives you a window to review the PR. If it is not merged yet, hosts simply rebuild from the current main (a no-op if nothing changed).
  • allowReboot = false — most NixOS services restart on switch. Set to true if you need kernel or initrd updates to take effect immediately.
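Putting the options together, the upgrade unit ends up running roughly the command below. This is a sketch of the effective invocation, not the module's literal implementation, and the repository URL is a placeholder:

```shell
# Sketch: roughly the command the nixos-upgrade unit runs, given the
# module options above. The Git URL is a placeholder.
host="$(hostname)"
cmd="nixos-rebuild switch --flake git+https://your-forgejo/nix/nixos.git#${host} --no-update-lock-file"
echo "$cmd"
```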

Step 4: Enable on each host

Include the module in your flake and enable it per host:

nix
# flake.nix (simplified)
{
  outputs = { nixpkgs, ... }: {
    nixosConfigurations = {
      webserver = nixpkgs.lib.nixosSystem {
        modules = [
          ./modules/auto-upgrade.nix
          ./hosts/webserver
        ];
      };

      devbox = nixpkgs.lib.nixosSystem {
        modules = [
          ./modules/auto-upgrade.nix
          ./hosts/devbox
        ];
      };
    };
  };
}

Then in each host’s configuration:

nix
# hosts/webserver/default.nix
{
  services.auto-upgrade.enable = true;
}

For machines you deploy manually (like the CI builder itself, or test machines), simply omit the option or set it to false.

Step 5: Verify the setup

After deploying the configuration to your hosts, check that the systemd timer and service are in place:

terminal
# Check the timer schedule and when it last fired
$ systemctl status nixos-upgrade.timer
● nixos-upgrade.timer
     Loaded: loaded
     Active: active (waiting)
    Trigger: tomorrow at 05:00

# Check the last upgrade run
$ journalctl -u nixos-upgrade.service -n 30

The nixos-upgrade.service logs show the full nixos-rebuild switch output — which generation was activated, which services restarted, and any errors.

The full flow

Here is what happens every day without any manual intervention:

text
02:00 UTC  Forgejo Actions runs on CI builder
           ├─ scripts/flake-update-diff.sh
           │   ├─ Discover hosts, build before
           │   ├─ nix flake update
           │   ├─ Build after, nvd diff per host
           │   └─ Report (stdout → workflow)
           └─ Open PR with diff report

   You     Review PR, check package changes, merge to main

05:00      Each NixOS host (systemd timer)
           ├─ git fetch main (via flake URL)
           ├─ nix build own configuration
           └─ nixos-rebuild switch

If you don’t merge the PR before 05:00, hosts simply rebuild from the current main — effectively a no-op. The update waits until you merge.

Rollback

If a bad update slips through, NixOS makes rollback trivial:

terminal
# Roll back to the previous generation
$ sudo nixos-rebuild switch --rollback

# Or boot into a previous generation from the bootloader

Every generation is kept in the Nix store until garbage-collected, so you can always go back.

Adapting the schedule

What to change     Where                                  Default
CI update time     .forgejo/workflows/update.yaml, cron:  0 2 * * * (02:00 UTC)
Host upgrade time  modules/auto-upgrade.nix, dates        05:00
Auto-reboot        modules/auto-upgrade.nix, allowReboot  false
Upgrade operation  modules/auto-upgrade.nix, operation    switch

For desktops you might prefer operation = "boot" so the new configuration only activates on the next reboot, avoiding disruption during work hours. For servers, switch is usually the right choice since services restart gracefully.

Why this works well

  • No unreviewed changes reach production. The PR gate means you always see what changed before it deploys.
  • Hosts never drift from each other. Every machine builds from the same flake.lock on main.
  • Zero manual SSH sessions. Once the module is enabled, upgrades are fully automatic.
  • Safe by default. If CI fails to build a host, the PR shows the error. If a host fails to build, it stays on its current generation. If you merge something bad, nixos-rebuild switch --rollback fixes it in seconds.
  • Works for any fleet size. Whether you run two machines or twenty, the same workflow scales — one PR, one merge, all hosts converge.