I've been using Kamal to deploy Laravel apps since v1, so when v2 came out, I upgraded all my apps to it... and it was a breeze. I thought about updating the video, but there were no hiccups in my experience or things to watch out for... It Just Worked™. That being said, I'm still going to record a newer video on it for Laravel apps soon, mainly because the previous one was recorded under the tool's old name (MRSK).
Until I have time to work on the video, here's a short introduction to Kamal 2.0 for Laravel apps.
Installation
Kamal is mainly distributed as a RubyGem, so to install it locally you need Ruby installed. Once you have that, you can install it like this:
gem install kamal
They also distribute it as a Docker image, so you don't necessarily need Ruby installed on your system if you don't want to. Instead, you can add an alias like this to your ~/.bashrc or your system's equivalent:
alias kamal='docker run -it --rm -v "${PWD}:/workdir" -v "${SSH_AUTH_SOCK}:/ssh-agent" -v /var/run/docker.sock:/var/run/docker.sock -e "SSH_AUTH_SOCK=/ssh-agent" ghcr.io/basecamp/kamal:latest'
This is the alias for Linux systems; check the docs for the ones for other operating systems.
I like this as an alternative, and I'm pushing for a similar approach with Takeout, but for Kamal, I find the alias route a bit limiting. I've had issues with it not persisting my registry authentication across commands, or not having access to the environment variables needed to build the secrets (or to the 1Password CLI client, for instance). You can adapt the alias to suit your needs, but I prefer the RubyGem installation.
Once we install it, we can run `kamal init` and it should create a `config/deploy.yml` file as well as a `.kamal/` directory with the `.kamal/secrets` file and some example hooks.
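In case you're curious, the generated layout looks roughly like this (the exact set of sample hooks may vary between Kamal versions):

config/deploy.yml    # the main Kamal configuration
.kamal/secrets       # where secrets are resolved from
.kamal/hooks/        # sample hook scripts (pre-build, pre-deploy, post-deploy, etc.)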
Preparing the App
The only thing Kamal requires from your app is a Dockerfile, at least until support for Cloud Native Buildpacks gets approved and merged. I've talked about Cloud Native Buildpacks and Laravel before, and I think that would be a great addition for other stacks using Kamal. However, I think the best way forward would be to have a production-ready Dockerfile shipping with the framework, like Rails does. Since we don't have that, we can use the ServerSideUp PHP images in our Laravel apps.
Setting up a production-ready Dockerfile is outside of the scope of this post, but here's an example. I've used this Dockerfile before, but feel free to adapt it to your application's needs.
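A rough sketch along those lines could look like this. It assumes the serversideup/php FPM + nginx variant (which serves HTTP on port 8080 by default, matching the `app_port` we'll configure below) and a Vite build for the assets; the Node and PHP versions, extensions, and build steps are placeholders to adjust for your app.

# Build the front-end assets in a separate stage
FROM node:22 AS assets
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM serversideup/php:8.3-fpm-nginx
# Extensions must be installed as root, then we drop back to the unprivileged user
USER root
RUN install-php-extensions intl bcmath
USER www-data
WORKDIR /var/www/html
COPY --chown=www-data:www-data . .
COPY --chown=www-data:www-data --from=assets /app/public/build ./public/build
RUN composer install --no-dev --optimize-autoloader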
With the Dockerfile ready, we can start configuring our `config/deploy.yml` file. Here's an example:
service: chirper

image: tonysm/chirper-app-kamal-example

servers:
  web:
    - 101.10.10.10
  cron:
    hosts:
      - 101.10.10.10
    cmd: php artisan schedule:work
    options:
      health-cmd: healthcheck-schedule

proxy:
  ssl: true
  host: chirper.turbo-laravel.com
  app_port: 8080

registry:
  username: tonysm
  password:
    - KAMAL_REGISTRY_PASSWORD

builder:
  arch: amd64

env:
  clear:
    APP_NAME: "Chirper"
    APP_ENV: "production"
    APP_DEBUG: false
    APP_URL: "https://chirper.turbo-laravel.com"
    ASSET_URL: "https://chirper.turbo-laravel.com"
    DB_CONNECTION: "sqlite"
    DB_DATABASE: "/dbs/chirper.db"
    STORAGE_MEDIA_DISK_PATH: "/app-media/"
    LOG_CHANNEL: "stderr"
    LOG_DEPRECATIONS_CHANNEL: "null"
    LOG_LEVEL: "debug"
    MAIL_MAILER: "log"
    BROADCAST_DRIVER: "log"
    FILESYSTEM_DISK: "local"
    QUEUE_CONNECTION: "database"
    SESSION_DRIVER: "database"
    SESSION_LIFETIME: "120"
    REDIS_HOST: "chirper-redis"
  secret:
    - APP_KEY

aliases:
  shell: app exec --interactive --reuse "sh"
  tinker: app exec --interactive --reuse "php artisan tinker"

volumes:
  - "dbs:/dbs/"
  - "storage:/app-media/"

asset_path: /var/www/html/public/build

accessories:
  redis:
    image: valkey/valkey:8
    host: 101.10.10.10
    directories:
      - redis-data:/data
In this example, we're setting up web and cron roles. Web will handle HTTP requests, while cron will execute Laravel's Task Scheduler for us.
By default, Kamal will make sure there's a kamal-proxy service running on each host of the web role. This proxy routes requests to the specific containers. When we're deploying a new version of our application, it switches requests over to the new container. It also allows us to deploy multiple applications on the same host: when we're deploying a second application (with its own Dockerfile, accessories, deploy cadence, and so on), Kamal will reuse the kamal-proxy that is already installed on the host.
We can create as many roles as we want. A typical web application will have at least three entry points: web, worker, and scheduler.
With Kamal, we can deploy multiple roles to the same host. But unlike other container orchestration tools, we can't run multiple containers of the same role on the same host. Kamal was designed to deploy apps to their hosts, much like Capistrano. So instead of scaling the number of containers of your web role, you instruct the web container to use all of the available cores. The worker containers should do the same: inside each worker container, you scale the number of worker processes based on the host's capacity and whatever else is deployed on it.
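For instance, if we wanted a dedicated worker role running Laravel's queue worker on the same host, we could add something like this to the `servers:` section of the config (the flags are illustrative, and you could also add a health check via `options:`, like the cron role does):

  worker:
    hosts:
      - 101.10.10.10
    cmd: php artisan queue:work --tries=3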
Proxy
As I mentioned earlier, Kamal will ensure a kamal-proxy service container is running on the hosts of the web role (and on other roles if you opt in). In the `config/deploy.yml` you'll see the proxy settings. We're mapping a domain, telling Kamal we want an SSL cert (it will issue one via Let's Encrypt), and telling it to route requests for that domain to port 8080 on our web container.
To learn more about the proxy, head over to the documentation, and also check out Kevin McConnell's talk at Rails World 2024 about it. There's a lot more we can do with kamal-proxy that isn't yet available via Kamal, like rollout deployments.
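Kamal also ships with a few commands for poking at the proxy from your machine, for example (see `kamal proxy --help` for the full list):

kamal proxy details   # state of the kamal-proxy container on each host
kamal proxy logs      # tail the proxy logs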
Registry
At the time of this writing, Kamal depends on having a registry to push your images to, and those are typically private images, which adds some cost to the process. However, there are ways to run your own registry, and they're working on dropping this dependency, which will make it even easier (and cheaper!) to deploy your apps with Kamal.
When we deploy, Kamal will build our image using our Dockerfile, log in to the registry (both locally and on all hosts) using our username and password (the latter comes from secrets, more on that soon), push the image to the registry, and pull it from there on the hosts. Then it will roll out the release by spinning up a new container, running health checks, switching traffic over to it in kamal-proxy, and finally stopping the previous container, provided the health checks passed.
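If you want to watch each of those steps as they run, both locally and on the hosts, you can pass the verbose flag to the deploy command:

kamal deploy --verbose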
Asset Bridging
Kamal was built for web applications, and web applications typically have assets (JavaScript, CSS, images, etc.). When we're deploying a new version of our app, there's a window where both the old and new containers are live. This could result in a situation where a user's page is served by the old container, but the request to download its assets ends up reaching the new container, so they'd get a 404 on that JavaScript file you changed, for instance.
To avoid issues like that, Kamal will extract the built assets from your configured `asset_path` location into a volume that is shared between the old and new containers, so the assets of both versions remain available during the deployment.
Envs and Secrets
You may notice we're specifying most of our configs in plain text under the `env.clear` key. Pretty much none of those are sensitive, so we can keep them in plain text like that in the config. However, some configs are sensitive, like passwords or the app key. For those, we can use secrets.
Secrets are kept separately from our configs in the `.kamal/secrets` file. In this file, we instruct Kamal on how to build the secrets. We SHOULD NOT, however, put our secrets there in plain text. Instead, we should either fetch them from environment variables on our deploy machine or use one of the supported password managers' CLI tools to extract them from a shared vault.
Here's an example of this application's secrets file:
KAMAL_REGISTRY_PASSWORD="${KAMAL_REGISTRY_PASSWORD}"
APP_KEY="${KAMAL_PRODUCTION_CHIRPER_APP_KEY}"
In this example, to keep it simple, we're mapping some environment variables that must be available on our deploy machine. But you should probably use a password manager.
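With the 1Password adapter, for instance, the secrets file could look something like this (the account, vault, and item names are placeholders, and Kamal ships adapters for a few other password managers too):

SECRETS=$(kamal secrets fetch --adapter 1password --account my-account --from MyVault/MyItem KAMAL_REGISTRY_PASSWORD APP_KEY)

KAMAL_REGISTRY_PASSWORD=$(kamal secrets extract KAMAL_REGISTRY_PASSWORD ${SECRETS})
APP_KEY=$(kamal secrets extract APP_KEY ${SECRETS})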
Preparing the Servers
There's not much we need to do on the servers to prep them for hosting our container apps, to be honest. If you can SSH in as root (or another user with root permissions) via SSH keys and the servers run Ubuntu, you're good to go. Provisioning servers and making sure they're secure and not exposing things they shouldn't is not Kamal's responsibility; you have to handle that yourself.
Some tips on setting up a new server:
- Lock it down. SSH only via SSH keys, not passwords. And keep your keys safe (don't move them around)
- Install fail2ban to prevent brute-force attacks
- Use a firewall. Make sure you only expose the ports you need on your hosts. That usually means port 22 (SSH) on all hosts, and ports 80 and 443 (HTTP and HTTPS) on the web hosts. Typically, those are the only ports you need to expose unless you're doing something special, in which case you know what you're doing, so take care of those exposed ports and services. See the sketch after this list.
- Keep your apps up-to-date. That means regularly updating the language (Docker image), frameworks, libs, etc.
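As a rough sketch of the firewall tip above, here's what it could look like with ufw on Ubuntu (run as root; adjust if you've changed the SSH port or run other services):

ufw allow 22/tcp     # SSH, on all hosts
ufw allow 80/tcp     # HTTP, on web hosts (also used for the Let's Encrypt challenge)
ufw allow 443/tcp    # HTTPS, on web hosts
ufw enable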
Deploying the App
With those concepts in place, let's deploy our app for the first time. Since it's our first deployment, we'll use a dedicated setup command instead of the regular deploy one.
kamal setup
You should see all the steps and commands Kamal executed, both locally and on your remote hosts.
Since we're deploying accessories (Redis in this example), Kamal will start with those. Notice that we're not exposing any ports on this accessory. That's because it's running on the same host as our app containers, and Kamal will put everything on the same Docker network, so we don't have to expose it. If we were running accessories on dedicated hosts, we'd have to expose them, then make sure the firewall is configured to only allow the app hosts to reach the exposed ports, use strong passwords, and so on.
When setting up accessories, Kamal will generate a hostname for each one using the `${service_name}-${accessory_name}` pattern, so we can use the hostname `chirper-redis` in our example here (that's what we've configured in the `REDIS_HOST` env).
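Kamal also gives us a few commands to manage accessories after they're up, for example (see `kamal accessory --help` for the full list):

kamal accessory details redis   # inspect the running accessory container
kamal accessory logs redis      # tail its logs
kamal accessory reboot redis    # recreate it, e.g. after changing its config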
For subsequent deploys, we can use the `kamal redeploy` command if we haven't added new hosts, or the regular `kamal deploy` command when we add new ones (the former skips the steps that make sure Docker is installed, which speeds things up a bit).
Additionally, if you're only deploying config changes (changes to the secrets or the deploy.yml), you can use the `--skip-push` flag. Kamal will not build and push a new image version; instead, it will reuse the previous image (which was tagged with your most recent Git commit hash).
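In practice, day-to-day deploys look something like this:

kamal deploy               # full deploy: build, push, and roll out a new version
kamal redeploy             # same, but skips the server bootstrap checks
kamal deploy --skip-push   # config-only changes: reuse the image for the current Git commit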
Once we have it deployed, the app should be live and functioning. It shouldn't take long: redeploys usually take a few seconds when we don't have to rebuild the images, for instance. And even when we do, we're relying on Docker image layers here, so most of the steps should be cached anyway, which means it should also be quick (we're talking about a minute here, depending on how your image is built).
Aliases
Sometimes you'll need to get a shell inside the remote container. We can do so with a command like:
kamal app exec --interactive --reuse "sh"
Or get a Tinker (Laravel's REPL) session to figure something out, which we can do with a command like:
kamal app exec --interactive --reuse "php artisan tinker"
To avoid having to retype all of that every time, Kamal allows us to define aliases. In our case, we can get a shell with:
kamal shell
And start a Tinker session with:
kamal tinker
That's based on the aliases defined in the `aliases:` key of our config.
Wrapping Up
That's it! We have our Laravel application deployed using Kamal! Now you can access it in the browser using your domain. It should have SSL and everything.
You can deploy with Kamal from your local machine, or set up a GitHub Actions workflow to deploy your apps automatically if you want to.
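Here's a rough sketch of what such a workflow could look like. It assumes you've added `SSH_PRIVATE_KEY`, `KAMAL_REGISTRY_PASSWORD`, and `KAMAL_PRODUCTION_CHIRPER_APP_KEY` as repository secrets; the action versions and secret names are illustrative, and depending on your setup you may also need to add the servers to the runner's known_hosts (with ssh-keyscan, for instance) before the deploy step.

name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"

      - run: gem install kamal

      - uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - run: kamal deploy
        env:
          KAMAL_REGISTRY_PASSWORD: ${{ secrets.KAMAL_REGISTRY_PASSWORD }}
          KAMAL_PRODUCTION_CHIRPER_APP_KEY: ${{ secrets.KAMAL_PRODUCTION_CHIRPER_APP_KEY }}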
If you have tried other container orchestration tools before, you can see how lightweight Kamal is compared to those. That is Kamal's superpower to me. I'm using it more and more on my projects. So simple.