
Hosting & Deployment Architecture

Single Hetzner VPS running NixOS. Infrastructure services (PostgreSQL, Redis, Caddy) managed declaratively by NixOS. Application code (MedusaJS, Next.js storefront) deployed as Docker containers via virtualisation.oci-containers. CI/CD through GitHub Actions. GitHub Flow branching.


1. Architecture Overview

                         Internet
                            |
                    +-------+-------+
                    |   Cloudflare  |
                    |   DNS + CDN   |
                    +---+-------+---+
                        |       |
          (proxied)     |       |   (DNS-only, not proxied)
                        v       |
              +-------------+   |   +------------------+
              | Hetzner VPS |   +-->| shops-btc-01     |
              | (NixOS)     |       | (NixOS)          |
              |             |       |                  |
              | +---------+ |       | BTCPay Server    |
              | |  Caddy  | |       | CLN + bitcoind   |
              | +--+---+--+ |       +------------------+
              |    |   |    |        btcpay.research-relay.com
              |    v   v    |
              | :9000 :8000 |
              | +--+ +----+ |
              | |MS| | SF | |   MS = medusa-server (Docker)
              | +--+ +----+ |   MW = medusa-worker (Docker)
              | +--+        |   SF = storefront (Docker)
              | |MW|        |
              | +--+        |
              |             |
              | PostgreSQL  |   NixOS-managed services
              | Redis       |   (not in Docker)
              | Caddy       |
              +-------------+

Domains:
  research-relay.com        -> Storefront (port 8000)
  api.research-relay.com    -> Medusa API + Admin (port 9000)
  btcpay.research-relay.com -> shops-btc-01 (separate server)

Why this split

  • Infrastructure services are stable -- PostgreSQL, Redis, and Caddy rarely change. NixOS manages them declaratively with automatic restarts, log rotation, and config validation.
  • App code changes constantly -- Docker containers are easy to update: pull a new image, restart the container. No NixOS rebuild needed for app deploys.
  • NixOS oci-containers integrates containers with systemd, giving you restart policies, journalctl logging, and dependency ordering for free.

2. Infrastructure

Hetzner VPS

Spec      Value
Model     CX33 (shared vCPU, cost-optimized)
CPU       4 vCPU
RAM       8 GB
Storage   80 GB SSD
Location  Helsinki, Finland (HEL1)
Cost      ~EUR 5.49/mo (~$6/mo)
OS        NixOS (installed via nixos-anywhere)

8 GB RAM is comfortable for PostgreSQL + Redis + three Docker containers. If you need more headroom later, Hetzner lets you resize in-place.

NixOS-Managed Services

All configured declaratively in the server's NixOS configuration (configuration.nix):

Service        NixOS Option                    Notes
PostgreSQL 16  services.postgresql             ensureDatabases = [ "research-relay" ]; ensureUsers for medusa
Redis 7        services.redis.servers.default  Bound to 127.0.0.1:6379
Caddy          services.caddy                  Reverse proxy: research-relay.com -> :8000, api.research-relay.com -> :9000; auto-HTTPS via ACME
Firewall       networking.firewall             Allow only ports 22, 80, 443
Backups        services.postgresqlBackup       Nightly pg_dump at 3 AM to /var/backup/postgresql
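The services in the table above map to a configuration.nix fragment along these lines (a sketch: domains, database name, and backup settings come from this doc; granting the medusa role rights on the research-relay database is a separate manual step or initialScript, since ensureUsers alone does not grant them):

```nix
{ config, pkgs, ... }:
{
  services.postgresql = {
    enable = true;
    package = pkgs.postgresql_16;
    ensureDatabases = [ "research-relay" ];
    # Creates the role; grant it rights on "research-relay" separately.
    ensureUsers = [ { name = "medusa"; } ];
  };

  services.redis.servers.default = {
    enable = true;
    bind = "127.0.0.1";
    port = 6379;
  };

  services.caddy = {
    enable = true;  # auto-HTTPS via ACME
    virtualHosts."research-relay.com".extraConfig = ''
      reverse_proxy 127.0.0.1:8000
    '';
    virtualHosts."api.research-relay.com".extraConfig = ''
      reverse_proxy 127.0.0.1:9000
    '';
  };

  networking.firewall.allowedTCPPorts = [ 22 80 443 ];

  services.postgresqlBackup = {
    enable = true;
    databases = [ "research-relay" ];
    startAt = "*-*-* 03:00:00";  # nightly pg_dump
    location = "/var/backup/postgresql";
  };
}
```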

Upload backups to Backblaze B2 via a systemd timer using rclone or the b2 CLI.
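That upload can live in the same configuration.nix as a oneshot service plus timer. A sketch using rclone, assuming a pre-configured remote named b2 and a bucket named research-relay-backups (both placeholders):

```nix
# Hypothetical service + timer that ships the nightly dumps to B2.
systemd.services.backup-upload = {
  serviceConfig.Type = "oneshot";
  path = [ pkgs.rclone ];
  script = ''
    # "b2:research-relay-backups" is a placeholder remote/bucket.
    rclone sync /var/backup/postgresql b2:research-relay-backups/postgresql
  '';
};

systemd.timers.backup-upload = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = "*-*-* 04:00:00";  # an hour after the 3 AM pg_dump
    Persistent = true;              # catch up if the VPS was down
  };
};
```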

Docker Containers (via oci-containers)

Defined in virtualisation.oci-containers.containers:

Container      Image                                                Port  Key Env Vars
medusa-server  ghcr.io/scientific-oops/rr-bizops/medusa:latest      9000  MEDUSA_WORKER_MODE=server
medusa-worker  ghcr.io/scientific-oops/rr-bizops/medusa:latest      none  MEDUSA_WORKER_MODE=worker, DISABLE_MEDUSA_ADMIN=true
storefront     ghcr.io/scientific-oops/rr-bizops/storefront:latest  8000  NEXT_PUBLIC_MEDUSA_BACKEND_URL=https://api.research-relay.com

All containers load secrets from environmentFiles = [ "/etc/rr-bizops/.env" ]. Medusa containers use --network=host so they can reach NixOS-managed PostgreSQL and Redis on localhost. The storefront does not need host networking since it talks to Medusa over HTTPS.

Each container gets a systemd unit (docker-medusa-server.service, etc.) with automatic restarts and journalctl logging.
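A sketch of those definitions (image names, env file path, and worker-mode variables are from the tables above; treat the exact option set as illustrative):

```nix
virtualisation.oci-containers = {
  backend = "docker";
  containers = {
    medusa-server = {
      image = "ghcr.io/scientific-oops/rr-bizops/medusa:latest";
      environment.MEDUSA_WORKER_MODE = "server";
      environmentFiles = [ "/etc/rr-bizops/.env" ];
      extraOptions = [ "--network=host" ];  # reach Postgres/Redis on localhost
    };
    medusa-worker = {
      image = "ghcr.io/scientific-oops/rr-bizops/medusa:latest";
      environment = {
        MEDUSA_WORKER_MODE = "worker";
        DISABLE_MEDUSA_ADMIN = "true";
      };
      environmentFiles = [ "/etc/rr-bizops/.env" ];
      extraOptions = [ "--network=host" ];
    };
    storefront = {
      image = "ghcr.io/scientific-oops/rr-bizops/storefront:latest";
      environmentFiles = [ "/etc/rr-bizops/.env" ];
      ports = [ "127.0.0.1:8000:8000" ];  # only Caddy needs to reach it
    };
  };
};
```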


3. Deployment Pipeline

Branching: GitHub Flow

  • main is always deployable. Merges to main auto-deploy to production.
  • All work happens in short-lived feature branches with pull requests.
  • No develop, staging, or release branches. Add a staging environment later if needed.

CI: On Pull Request

Workflows are path-filtered so storefront changes don't trigger MedusaJS checks and vice versa.

MedusaJS (app/ changes)

- npm ci
- npx tsc --noEmit
- npm run test:unit

Storefront (storefront/ changes)

- npm ci
- npm run lint
- npm run typecheck
- npm run build

CD: On Merge to Main

name: Deploy
on:
  push:
    branches: [main]

jobs:
  build-medusa:
    if: # app/** changed (per-job path filter, e.g. dorny/paths-filter)
    steps:
      - Build Docker image with multi-stage Dockerfile
      - Tag with git SHA + "latest"
      - Push to ghcr.io/scientific-oops/rr-bizops/medusa

  build-storefront:
    if: # storefront/** changed (per-job path filter, e.g. dorny/paths-filter)
    steps:
      - Build Docker image with multi-stage Dockerfile
      - Tag with git SHA + "latest"
      - Push to ghcr.io/scientific-oops/rr-bizops/storefront

  deploy:
    needs: [build-medusa, build-storefront]
    steps:
      - SSH to VPS
      - docker pull ghcr.io/scientific-oops/rr-bizops/medusa:latest
      - docker pull ghcr.io/scientific-oops/rr-bizops/storefront:latest
      - systemctl restart docker-medusa-server
      - systemctl restart docker-medusa-worker
      - systemctl restart docker-storefront

The Medusa server container runs medusa db:migrate on startup (via the Dockerfile entrypoint), so migrations are applied automatically on deploy.

Rollback

  • Every image is tagged with both latest and the git SHA (e.g., ghcr.io/.../medusa:abc123f).
  • To roll back: pin a specific SHA tag in the NixOS container config and run nixos-rebuild switch, or SSH in, docker pull the tagged image, and restart the container unit.
  • Database rollback: Restore from the nightly pg_dump backup. For migration-level rollback, use medusa db:rollback <module>.
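A rollback pin is a one-line change in the container config (using the example SHA tag from above):

```nix
# Pin the image to a known-good git-SHA tag, then: nixos-rebuild switch
virtualisation.oci-containers.containers.medusa-server.image =
  "ghcr.io/scientific-oops/rr-bizops/medusa:abc123f";
```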

4. Domain & DNS (Cloudflare)

All DNS managed in Cloudflare. Domain registered via Cloudflare Registrar.

Record                     Type   Value               Proxied  Purpose
research-relay.com         A      VPS IP              Yes      Storefront
www.research-relay.com     CNAME  research-relay.com  Yes      Redirect to apex
api.research-relay.com     A      VPS IP              Yes      Medusa API + Admin
btcpay.research-relay.com  A      shops-btc-01 IP     No       BTCPay Server

BTCPay is not proxied through Cloudflare because it requires direct WebSocket connections for Lightning Network compatibility. TLS for BTCPay is handled by Let's Encrypt on shops-btc-01 via nginx/ACME.

Cloudflare SSL mode: Full (Strict). Caddy handles origin certificates automatically via ACME.


5. Environment Management

Environment  Stack                                                   Config
Development  devenv up (local PostgreSQL 16 + Redis 7 + Node.js 22)  .env in repo root (gitignored)
Production   NixOS + Docker on Hetzner VPS                           /etc/rr-bizops/.env on server

Secrets

  • CI/CD secrets: Stored in GitHub Secrets (repo settings). Used by GitHub Actions for GHCR auth and SSH deploy.
  • Production secrets: /etc/rr-bizops/.env on the VPS, owned by root, chmod 600. Contains DATABASE_URL, REDIS_URL, JWT_SECRET, COOKIE_SECRET, BTCPay keys, etc.
  • Generate secrets: openssl rand -hex 32

Required GitHub Secrets:

DEPLOY_SSH_KEY        - SSH private key for VPS access
DEPLOY_HOST           - VPS IP or hostname
GHCR_TOKEN            - GitHub token for container registry (or use GITHUB_TOKEN)


6. Monitoring (Minimum Viable)

Need     Tool                    Cost       Notes
Uptime   UptimeRobot             Free       Monitor research-relay.com and api.research-relay.com/health
Errors   Sentry                  Free tier  5K errors/mo; add @sentry/node to Medusa, @sentry/nextjs to storefront
Logs     journalctl              Free       journalctl -u docker-medusa-server -f for live tailing
Metrics  None (for now)          --         Add Prometheus + Grafana when you actually need dashboards
Backups  pg_dump + Backblaze B2  ~$0.50/mo  Nightly dumps, 30-day retention

Health check endpoints

  • Medusa: GET https://api.research-relay.com/health (built-in)
  • Storefront: GET https://research-relay.com (check for 200)

7. Cost Summary

Item                                    Monthly
Hetzner CX33                            ~$7
Domain (research-relay.com, amortized)  ~$1
Backblaze B2 backups                    ~$0.50
Cloudflare                              Free
GitHub Actions                          Free (2,000 min/mo)
GHCR                                    Free (500 MB storage for public repos)
Sentry                                  Free
UptimeRobot                             Free
Total                                   ~$8.50/mo

8. Dockerfiles

Both use multi-stage builds (node:22-alpine): deps stage, build stage, runner stage. Key details:

MedusaJS (app/Dockerfile): Runs npm run build which outputs to .medusa/server/. Entrypoint runs npx medusa db:migrate && npx medusa start. Exposes port 9000.

Storefront (storefront/Dockerfile): Requires build args NEXT_PUBLIC_MEDUSA_BACKEND_URL and NEXT_PUBLIC_MEDUSA_PUBLISHABLE_KEY (Next.js bakes public env vars at build time). Uses Next.js standalone output mode. Runs node server.js. Exposes port 8000.


9. Future Scaling Path

Do not do any of this until you have evidence you need it:

  1. Separate the database -- Move PostgreSQL to its own VPS, or switch to managed (Neon, Supabase). Do this when DB CPU or memory contends with the app containers.
  2. Add a second app VPS -- Put a Hetzner load balancer in front of two app servers. Do this when a single VPS can't handle request volume.
  3. Move storefront to the edge -- Deploy Next.js to Cloudflare Pages or Vercel for edge caching and zero-config scaling. Do this when storefront latency matters more than simplicity.
  4. Redis HA -- Add Redis Sentinel or switch to KeyDB. Do this when Redis downtime is unacceptable (it won't be for a while).
  5. Staging environment -- Clone the VPS config to a second Hetzner instance. Do this when you have more than 2 developers or need to test migrations safely.

Right now, a single $7/mo VPS handles everything. Start here.