Kamal Kitchen Sink

Episode #481 by David Kimura

Summary

In this episode, we look at creating an entire infrastructure (proxy, load balancer, app servers, worker servers, database server, and a storage server) on our own hardware, using Kamal to provision and deploy our Ruby on Rails application.
Tags: rails, deploy, kamal (runtime: 31:57)

Chapters

  • Introduction (0:00)
  • Creating the Infrastructure (5:14)
  • Installing Cloudflare Tunnel (7:25)
  • Setting up Kamal (8:51)
  • Load Balancer Accessory (9:56)
  • PostgreSQL Accessory (10:47)
  • MinIO Accessory (12:41)
  • Load Balancer Config (14:44)
  • Testing out the deployment (17:21)
  • Creating MinIO Bucket (18:20)
  • Waiting on Kamal deploy to finish (18:34)
  • Troubleshooting the first deployment (19:10)
  • Reloading accessory config (20:05)
  • Displaying hostnames and redeploying (21:00)
  • Proxmox backups (22:13)
  • Managing Resources (23:34)
  • Adding ActionText and a Scaffold (24:15)
  • Configuring ActiveStorage with MinIO (25:41)
  • Adding MissionControl Jobs (27:42)
  • Creating a WebSocket (28:39)
  • Final Thoughts (31:06)

Resources

Download Source Code

# Terminal
kamal setup
kamal accessory reboot loadbalancer
kamal deploy
rails action_text:install
rails g scaffold posts title content:rich_text
rails g job time_broadcast
bundle add aws-sdk-s3
bundle add mission_control-jobs

# credentials
secret_key_base: SECRET_KEY_BASE
postgres:
  password: PASSWORD
minio:
  root_user: minio
  root_password: PASSWORD
  endpoint: https://minio.DOMAIN

# Proxmox

TEMPLATE="local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"
SWAP="512"
PASSWORD="Password123"
START="0"
FEATURES="nesting=1"
SSH_KEY="/root/.ssh/id_kobaltz.pub"
BRIDGE="vmbr0"
GATEWAY="192.168.1.1"
IP_PREFIX="192.168.1"
POOL="example"

pvesh create /pools -poolid $POOL -comment "Resource Pool for Example"

containers=(
    "111 cloudflared 2 2048 10"
    "112 loadbalancer 2 2048 10"
    "113 app1 2 4096 16"
    "114 app2 2 4096 16"
    "115 worker1 2 4096 16"
    "116 worker2 2 4096 16"
    "117 database 2 8192 256"
    "118 minio 2 8192 256"
)

for container in "${containers[@]}"; do
    read -r ID HOSTNAME CORES MEMORY ROOTFS <<< "$container"
    IP="$IP_PREFIX.$ID"
    pct create "$ID" "$TEMPLATE" \
        --swap "$SWAP" \
        --password "$PASSWORD" \
        --start "$START" \
        --hostname "$HOSTNAME" \
        --features "$FEATURES" \
        --ssh-public-keys "$SSH_KEY" \
        --cores "$CORES" \
        --memory "$MEMORY" \
        --rootfs "local-lvm:$ROOTFS" \
        --net0 "name=eth0,bridge=$BRIDGE,ip=$IP/24,gw=$GATEWAY" \
        --pool "$POOL"
done
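Each entry in the containers array packs five fields into one whitespace-separated string, and the `read -r` inside the loop splits them back into named variables. A standalone sketch of that parsing step, using one of the specs from the array:

```shell
#!/usr/bin/env bash
# Split a container spec the same way the provisioning loop does:
# word-splitting on whitespace via read with a here-string.
spec="113 app1 2 4096 16"
read -r ID HOSTNAME CORES MEMORY ROOTFS <<< "$spec"
echo "$ID $HOSTNAME $CORES $MEMORY $ROOTFS"  # prints: 113 app1 2 4096 16
```

Because the container ID doubles as the last octet of the IP (`IP="$IP_PREFIX.$ID"`), app1 above ends up at 192.168.1.113, matching the deploy.yml below.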

for ct in {111..118}; do
  pct start "$ct"  # containers were created with START="0", so boot them before exec
  pct exec "$ct" -- bash -c "apt update && apt upgrade -y && apt install -y fail2ban"
done

# config/deploy.yml
# Name of your application. Used to uniquely configure containers.
service: example

# Name of the container image.
image: kobaltz/example

# Deploy to these servers.
servers:
  web:
    - 192.168.1.113
    - 192.168.1.114
  job:
    hosts:
      - 192.168.1.115
      - 192.168.1.116
    cmd: bin/jobs


# SSL is terminated upstream (Cloudflare Tunnel in front of the nginx load balancer),
# so kamal-proxy's Let's Encrypt auto-certification is disabled here. With a single
# web server you could instead set ssl: true and let kamal-proxy obtain certificates.
#
# Note: If using Cloudflare, set encryption mode in SSL/TLS setting to "Full" to enable CF-to-app encryption.
proxy:
  ssl: false
  host: www.railsenv.com

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: kobaltz

  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .kamal/secrets).
env:
  secret:
    - RAILS_MASTER_KEY
  # clear:
    # Run the Solid Queue Supervisor inside the web server's Puma process to do jobs.
    # When you start using multiple servers, you should split out job processing to a dedicated machine.
    # SOLID_QUEUE_IN_PUMA: true

    # Set number of processes dedicated to Solid Queue (default: 1)
    # JOB_CONCURRENCY: 3

    # Set number of cores available to the application on each server (default: 1).
    # WEB_CONCURRENCY: 2

    # Match this to any external database server to configure Active Record correctly
    # Use example-db for a db accessory server on same machine via local kamal docker network.
    # DB_HOST: 192.168.1.2

    # Log everything from Rails
    # RAILS_LOG_LEVEL: debug

# Aliases are triggered with "bin/kamal <alias>". You can overwrite arguments on invocation:
# "bin/kamal logs -r job" will tail logs from the first server in the job section.
aliases:
  console: app exec --interactive --reuse "bin/rails console"
  shell: app exec --interactive --reuse "bash"
  logs: app logs -f
  dbc: app exec --interactive --reuse "bin/rails dbconsole"


# Use a persistent storage volume for sqlite database files and local Active Storage files.
# Recommended to change this to a mounted volume path that is backed up off server.
volumes:
  - "example_storage:/rails/storage"


# Bridge fingerprinted assets, like JS and CSS, between versions to avoid
# hitting 404 on in-flight requests. Combines all files from new and old
# version inside the asset_path.
asset_path: /rails/public/assets

# Configure the image builder.
builder:
  arch: amd64

  # # Build image via remote server (useful for faster amd64 builds on arm64 computers)
  # remote: ssh://docker@docker-builder-server
  #
  # # Pass arguments and secrets to the Docker build process
  # args:
  #   RUBY_VERSION: ruby-3.3.5
  # secrets:
  #   - GITHUB_TOKEN
  #   - RAILS_MASTER_KEY

# Use a different ssh user than root
# ssh:
#   user: app

# Use accessory services (secrets come from .kamal/secrets).
accessories:
  loadbalancer:
    image: nginx:latest
    host: 192.168.1.112
    port: "80:80"
    files:
      - config/nginx.conf:/etc/nginx/conf.d/default.conf

  postgres:
    image: postgres:17
port: "5432:5432"
    host: 192.168.1.117
    env:
      clear:
        POSTGRES_USER: example
        POSTGRES_DB: example_production
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data

  minio:
    image: minio/minio
    host: 192.168.1.118
    options:
      publish:
        - "9000:9000"
        - "9001:9001"
    env:
      secret:
        - MINIO_ROOT_USER
        - MINIO_ROOT_PASSWORD
    directories:
      - data:/data
    cmd: server /data --console-address ":9001"
#   db:
#     image: mysql:8.0
#     host: 192.168.0.2
#     # Change to 3306 to expose port to the world instead of just local network.
#     port: "127.0.0.1:3306:3306"
#     env:
#       clear:
#         MYSQL_ROOT_HOST: '%'
#       secret:
#         - MYSQL_ROOT_PASSWORD
#     files:
#       - config/mysql/production.cnf:/etc/mysql/my.cnf
#       - db/production.sql:/docker-entrypoint-initdb.d/setup.sql
#     directories:
#       - data:/var/lib/mysql
#   redis:
#     image: redis:7.0
#     host: 192.168.0.2
#     port: 6379
#     directories:
#       - data:/data

# config/nginx.conf
upstream backend {
  server 192.168.1.113;
  server 192.168.1.114;
}

server {
  listen 80;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_pass http://backend;
  }

  location /cable {
    proxy_pass http://backend/cable;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

# .kamal/secrets
# Grab the registry password from ENV
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD

# Improve security by using a password manager. Never check config/master.key into git!
RAILS_MASTER_KEY=$(cat config/master.key)
POSTGRES_PASSWORD=$(bin/rails runner "puts Rails.application.credentials.dig(:postgres, :password)")
MINIO_ROOT_USER=$(bin/rails runner "puts Rails.application.credentials.dig(:minio, :root_user)")
MINIO_ROOT_PASSWORD=$(bin/rails runner "puts Rails.application.credentials.dig(:minio, :root_password)")
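The `credentials.dig` calls above walk the nested keys from the encrypted credentials file; plain `Hash#dig` behaves the same way. A quick illustration against a hash mirroring the credentials structure (with placeholder values):

```ruby
# Mirrors the structure in config/credentials.yml.enc shown above
credentials = {
  postgres: { password: "PASSWORD" },
  minio: { root_user: "minio", root_password: "PASSWORD" }
}

puts credentials.dig(:postgres, :password)  # => PASSWORD
puts credentials.dig(:minio, :root_user)    # => minio
puts credentials.dig(:redis, :url).inspect  # => nil (missing keys don't raise)
```

Because `dig` returns nil rather than raising on a missing key, a typo in a key name surfaces as an empty secret at deploy time, so it's worth eyeballing the resolved values with `kamal secrets print` before the first deploy.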

# config/storage.yml
minio:
  service: S3
  access_key_id: <%= Rails.application.credentials.minio.root_user %>
  secret_access_key: <%= Rails.application.credentials.minio.root_password %>
  region: us-east-1
  bucket: example
  endpoint: <%= Rails.application.credentials.minio.endpoint %>
  force_path_style: true

# config/environments/production.rb
config.active_storage.service = :minio

# config/routes.rb
mount MissionControl::Jobs::Engine, at: "/jobs", as: :jobs

# app/views/layouts/_navigation_links.html.erb
<%= turbo_stream_from :time %>

<li class="nav-item me-4">
  <%= link_to "", "#", id: :time, class: 'nav-link' %>
</li>

# app/jobs/time_broadcast_job.rb
class TimeBroadcastJob < ApplicationJob
  queue_as :default

  def perform
    current_time = Time.current.strftime("%-I:%M:%S")
    Turbo::StreamsChannel.broadcast_replace_to(
      :time,
      target: "time",
      html: "<a href='#' id='time' class='nav-link'>#{current_time}</a>"
    )
  end
end
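The `%-I` directive in the job's strftime format is the 12-hour clock with the leading zero stripped. A plain-Ruby check of the format string, using a fixed `Time` in place of Rails' `Time.current`:

```ruby
# 14:05:09 on a 24-hour clock is 2:05:09 PM;
# %-I drops the leading zero that %I would keep.
t = Time.new(2024, 1, 1, 14, 5, 9)
puts t.strftime("%-I:%M:%S")  # => 2:05:09
puts t.strftime("%I:%M:%S")   # => 02:05:09
```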

# config/recurring.yml
time_broadcast:
  class: TimeBroadcastJob
  queue: background
  schedule: every second