Wrangling Complex Rails apps with Docker Part 1: The Dockerfile

One of the most difficult parts of working on a development team is onboarding. The process usually goes something like this: look at the README, install dependencies, realize that something is missing or out of date, ask another developer for help, install more undocumented dependencies, and repeat until you have the application running. There has to be a better way!

Docker makes setting up and tearing down development environments simple. A Dockerfile gives the developer a source-controlled, repeatable, and declarative way to define their app and its dependencies. No more going back and forth between an outdated README and leaning on other developers for setup instructions. Just clone your repo, run a few commands, and you're ready to go! Docker also solves the "works on my machine, but not on the server" problem, since all of the dependencies are encapsulated in the application's Dockerfile.

Part of the magic of developing Rails apps is that it's super simple to get started. Typically we install Ruby, install the Bundler and Rails gems, run rails new myapp, and we're ready to create the next big thing. We always start simple, with a dream and amazing tools… Then reality sets in:

  • We need caching. Install Redis and/or Memcached.
  • We need someplace to store our data. No problem, let's install PostgreSQL.
  • What about search? Ah, let's add Elasticsearch too.
  • Background Jobs? We have Sidekiq for that…

And the list goes on and on. Our dream of simple development with Rails becomes clouded by a seemingly endless list of moving parts and dependencies. We're not only tasked with keeping our application, app dependencies, and test suite up to date but also our external dependencies and services.

As developers, we're often asked to join projects that rely on many external services. Often a very specific (or outdated) version of a service absolutely HAS to be installed. We can use Homebrew or other package managers to install these services and pin them to specific versions, but what happens when we switch projects? We're left with a messy system, needing to keep several versions of Postgres, Elasticsearch, Redis, and other services around just to satisfy project dependencies. Then there's the issue of software compiled against shared libraries from different versions of those services.

There are a couple of ways to manage this. One is the pinning and linking strategy mentioned above. A better way is to Dockerize your Rails application and its dependent services. Adding Docker to your development workflow can be daunting at first, but I'll show you how to get started.

When you're finished reading this blog series, you'll know how to run Rails, Webpacker, external services, databases, and even tests in Docker without polluting your system with cruft from several software versions. Bootstrapping a project will involve running a command or script, and purging the project from your system when you're done is as simple as deleting a few Docker images and volumes.

Feel free to have a look at my example Rails/Docker app repo if you'd like a concrete example to follow along with.

Let's get started.

Prerequisites

This is an intermediate-level guide, so some working knowledge of Docker and docker-compose is required. In the interest of keeping things simple, I'll assume you already know how to install Docker and run containers. The Docker overview and Docker best practices are helpful refreshers if needed.

Creating the Application Dockerfile

In order to create a Rails application in Docker, it's important to select the proper base image.

If you're Dockerizing an existing application, the best thing to do is note which version of Ruby the app uses. If you're creating a new app, I suggest finding the latest version of Ruby that works with your Rails version. Head on over to Docker Hub's Ruby section and choose an image.
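For an existing app, the pinned version is usually easy to find: most Rails projects record it in a .ruby-version file (and often in the Gemfile as well). A quick sketch, using a temporary directory and an example version number standing in for your real app:

```shell
# Simulate an app directory with a pinned Ruby version (example value).
cd "$(mktemp -d)"
echo "3.0.0" > .ruby-version

# The pinned version tells you which base image tag to pick,
# e.g. ruby:3.0.0-slim on Docker Hub.
cat .ruby-version    # prints 3.0.0
```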

There are several tags (variants) of each Ruby version:

  • ruby:<version> (the default Debian-based version)
  • ruby:<version>-slim (a barebones Debian version)
  • ruby:<version>-alpine (Alpine Linux based image optimized for small size)

Read the "Image Variants" section on Ruby's Docker Hub page to learn more. I personally lean toward the slim variant: it's small, and Debian is more compatible with dependencies you may need to install. The bulk of our image's size is usually rubygems and other app dependencies anyway. Feel free to experiment with the alpine variant if you're feeling adventurous, but note that Alpine-based images use a different libc implementation (musl) than most other Linux distributions. If you have to put a precompiled binary into your Docker image, it will not work if it was compiled for glibc (the most common libc across Linux distributions). The solution is to use a non-Alpine variant, or to see if there's an Alpine package for your binary.

Things to keep in mind when creating a Dockerfile

  • Each RUN instruction creates a new layer in the Docker image. Try to combine commands into a single RUN instruction. Historically, images with fewer layers performed better; that's no longer the case, but it's still considered best practice to keep the number of layers to a minimum.
  • Set default environment variables with ENV, and build arguments with ARG in order to make build time and run time variable dependencies explicit.
  • COPY your Gemfile and Gemfile.lock (and install your gems) before copying the rest of your application. This leverages the build cache and makes rebuilds much faster when only application code has changed.
  • RUN instructions can be executed conditionally based on environment variables. This is great for precompiling assets or doing other things for production builds.
  • Set up a .dockerignore file to keep unwanted parts of your application out of the final image. log/* and tmp/* are good candidates for .dockerignore.
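As a starting point, a minimal .dockerignore for a Rails/Webpacker app might look like the sketch below. The exact entries depend on your project; public/packs and public/assets are only relevant if you compile assets locally:

```
.git
log/*
tmp/*
node_modules
public/packs
public/assets
.bundle
```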

Our goal in creating an efficient Dockerfile is to produce as few layers as possible and to leverage the build cache for faster rebuilds.
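One of the tips above, running an instruction conditionally based on a build argument, can be sketched like this (a hypothetical fragment for illustration, separate from the full Dockerfile below):

```dockerfile
ARG RAILS_ENV=development
# Only precompile assets when building a production image;
# development builds skip this step entirely.
RUN if [ "$RAILS_ENV" = "production" ]; then \
      bundle exec rails assets:precompile; \
    fi
```

Pass --build-arg RAILS_ENV=production to docker build to trigger the conditional branch.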

Here's my annotated Dockerfile. Feel free to borrow and modify it:

FROM ruby:3.0.0-slim as ruby-base

# Default env vars (applies to containers made from this image)
# Can be overridden at run time with -e
ENV APP_DIR=/app
# Sets the path to allow running bundler binstubs
ENV PATH="${PATH}:${APP_DIR}/bin"
ENV BUNDLE_PATH=/bundle/vendor
ENV NODE_VERSION=14.15.4
ENV YARN_VERSION=1.22.5

# Build args - shell variables assigned at build time.
# can be overridden at build time with --build-arg
ARG BUNDLER_VERSION=2.2.8
ARG RAILS_ENV=development
ARG BUNDLE_PATH=/bundle/vendor

# Here are the dependencies we need to build our app (and install Rails)
# This example installs the PostgreSQL and SQLite libraries (two commonly used databases in Rails apps).
#
# We're also installing the latest nodejs and yarn packages here for webpacker.
RUN apt-get update -qq && apt-get install -yq curl gnupg2 lsb-release \
    && curl -sL https://deb.nodesource.com/setup_14.x | bash \
    && curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list

RUN apt-get update -qq && apt-get install -yq build-essential git ruby-dev libpq-dev \
    libsqlite3-dev postgresql-client nodejs shared-mime-info \
    && apt-get install -yq yarn \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN gem install bundler -v $BUNDLER_VERSION

# This creates the APP_DIR that we defined earlier
# and sets it as the default directory in the container created from this image
RUN mkdir ${APP_DIR}
WORKDIR ${APP_DIR}

#
# Here, we copy our gem and npm dependency files into our image.
# We do this here so that when we change dependencies and a rebuild is needed, we can 
# leverage the build cache (everything above this point will not be rebuilt).
#
COPY Gemfile Gemfile.lock ./
RUN bundle check || bundle install

COPY package.json yarn.lock ./
RUN yarn install

# Copy all of our app in to the image (use .dockerignore to omit patterns)
COPY . ./

# Declares that we intend to listen on port 3000. This is a declarative documentation instruction
# that doesn't actually publish or open a port.
EXPOSE 3000

# This gets executed when we run a container made from this image
CMD ["bin/rails", "server", "-b", "0.0.0.0"]

Here we have our Dockerfile for a Rails application using PostgreSQL and Webpacker.

If you're feeling adventurous, this Dockerfile can be saved and built into a docker image using the following command:

docker build -t test-rails-image .

We can invoke rails like so:

docker run --rm test-rails-image rails --help

If you ever need to install or update a gem or invoke a Rails generator, it can be done through Docker. (Note: this is easier with docker-compose, which will be covered in the next post in this series.)

# $(pwd) will mount the current directory into /app as a volume
docker run --rm -v $(pwd):/app test-rails-image rails generate model Foo
docker run --rm -v $(pwd):/app test-rails-image bundle add rspec --group test

When you're finished playing with your new Rails docker image, you can delete it:

docker rmi test-rails-image

This image will not do much on its own without supporting services. In the next post, we'll orchestrate a system with Docker Compose, defining the services we need in order to run a full-fledged Rails application within Docker.

Next: Wrangling Complex Rails apps with Docker Part 2: Creating a docker-compose configuration

Image Attributions

Photo by Guillaume Bolduc on Unsplash

Posts in this series

  • Wrangling Complex Rails apps with Docker Part 1: The Dockerfile
  • Wrangling Complex Rails apps with Docker Part 2: Creating a docker-compose configuration

  • Category: Development
    Tags: Rails, Docker, Devops