Tips on Using Docker

Amirhosein Zolfaghari

April 12, 2019


In this article I’m going to share my experiences using Docker in both development and production. To follow along, you should already know what Docker is, how it works, and have run some Docker commands before.

If you have never heard of Docker or haven’t had a chance to work with it yet, please read Get Started with Docker first and then continue with this article.

I’m going to cover five tips for using Docker. They come from real problems I faced while using Docker at my company.

Do not store data inside a container

One of the most common rookie mistakes with Docker is storing your data inside a container. When the container is removed, all of that data is gone forever.

For instance, you create a mariadb container named my_database to use as your database.

docker run --name my_database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d -p 33306:3306 mariadb:10.2

Now you can connect your application to this mariadb instance on port 33306 of your local host and start storing your application data in it. But there is a critical catch: if you stop and remove this container, none of your data survives.

How can I keep my data?

Docker has a great feature called volumes, which lets you share files and directories between your host machine and your container. You set a volume with the -v flag when running a container.

docker run --name my_database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d -p 33306:3306 -v /tmp/my_database/data:/var/lib/mysql mariadb:10.2

Now if you take a look at the /tmp/my_database/data folder on your host machine, you can see all of the mariadb data there, and you can stop and remove this container peacefully. If you remove the container, you can start it again with the command above and all of your application data will still be there.

This is a great option for keeping your critical data, logs, or anything else you want to pass from your host machine into a container.
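A practical consequence of bind mounts is that the data stays visible to ordinary host tools. As a minimal sketch (paths follow the example above; for a live server you would stop mariadb or use a proper backup tool first), you can snapshot the data directory with tar:

```shell
# The bind mount above puts mariadb's files at /tmp/my_database/data on the
# host. mkdir -p stands the directory in so the snippet runs even without
# the container:
mkdir -p /tmp/my_database/data

# Archive the whole data directory from the host side:
tar -czf /tmp/my_database_backup.tar.gz -C /tmp/my_database data

# Show the resulting backup file:
ls -lh /tmp/my_database_backup.tar.gz
```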

Load only required files into containers

When you adopt Docker in your stack and dockerize your application, everything gets built inside the container: you copy all of your code into an image and create containers from it.

For example, here is a common Dockerfile for building a Rails application.

FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . /myapp

# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]

# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
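Since COPY . /myapp pulls in the entire build context, it is worth pairing a Dockerfile like this with a .dockerignore file so local junk never reaches the image. A minimal sketch; the entries below are common Rails candidates, not a definitive list, so adjust them to your app:

```shell
# Create a .dockerignore next to the Dockerfile; docker build skips anything
# listed here when sending the build context to the daemon:
cat > .dockerignore <<'EOF'
.git
log/
tmp/
node_modules/
EOF

# Show what will be excluded from the image:
cat .dockerignore
```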

However, there are situations where you cannot use the same image unchanged in every environment. For instance, I have different database connection credentials in each environment and I don’t want to commit my production password inside config/database.yml. In such cases, it is very common for people to mount the whole app directory into their container and keep different database credentials in each environment. If there is something you want to pass from your host machine into the container, it’s better to mount only that file, not the whole directory.
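As a sketch of the single-file approach (the file contents and the image name my-app-image:tag are hypothetical; the /myapp path follows the Dockerfile above):

```shell
# Prepare a host-side config file that differs per environment:
mkdir -p config
cat > config/database.yml <<'EOF'
production:
  host: my_database_host_address
EOF

# Mount only that one file into the container, read-only, instead of the
# whole app directory (needs a running Docker daemon, shown for reference):
#   docker run -d \
#     -v "$(pwd)/config/database.yml:/myapp/config/database.yml:ro" \
#     my-app-image:tag
grep -q 'my_database_host_address' config/database.yml && echo "config ready"
```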

This was an example of a case where you need to mount a file into your container; however, there is a better trick for this, which I’ll cover in the next section.

Use environment variables a lot

Did you read the previous part about how we handle database credentials in different environments?

Docker has a better option for these cases: environment variables. You can create containers from a single image but with different environment variables. It’s best to put anything that differs between your environments into environment variables; then you create each container with the variables that match its environment.

Here is a sample Rails database configuration file that reads credentials from environment variables.

default: &default
  pool: 50
  template: 'template0'
  adapter: 'postgresql'
  port: 5432
  timeout: 5000
  encoding: 'utf8'
  min_messages: WARNING
  username: <%= ENV["DB_USER"] %>

development:
  <<: *default
  database: <%= ENV["DB_NAME"] %>
  host: <%= ENV["DB_HOST"] %>
  password: <%= ENV["DB_PW"] %>

test:
  <<: *default
  host: 'postgres'
  database: 'test'
  username: 'postgres'

staging:
  <<: *default
  database: <%= ENV["DB_NAME"] %>
  host: <%= ENV["DB_HOST"] %>
  password: <%= ENV["DB_PW"] %>
  port: <%= ENV["DB_PORT"] %>

production:
  <<: *default
  database: <%= ENV["DB_NAME"] %>
  host: <%= ENV["DB_HOST"] %>
  password: <%= ENV["DB_PW"] %>
  port: <%= ENV["DB_PORT"] %>

Now you can run your container with the -e flag and pass the environment variables for your environment into the container.

docker run --name my_app \
-e DB_NAME=production_db \
-e DB_HOST=my_database_host_address \
-e DB_PW=my_secret_password \
-e DB_PORT=my_database_port \
-d \
my-app-image:tag
Open ports on the right interfaces

Remember the -p flag in the first command we used for running a mariadb container? This flag maps container ports to host ports. Be aware that if you don’t set an IP address, Docker opens the port on all interfaces. This is a serious mistake that lets attackers connect to that port and make your life awful.

You can set an IP address before the host machine port to tell Docker to open the port only on that interface.

Here are the different results of these two commands.

docker run --name my_database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d -p 33306:3306 -v /tmp/my_database/data:/var/lib/mysql mariadb:10.2

1ebc2f486141        mariadb:10.2        "docker-entrypoint..."   5 seconds ago       Up 5 seconds        0.0.0.0:33306->3306/tcp     my_database

docker run --name my_database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d -p 127.0.0.1:33306:3306 -v /tmp/my_database/data:/var/lib/mysql mariadb:10.2

34c94d0e5c5c        mariadb:10.2        "docker-entrypoint..."   10 seconds ago      Up 9 seconds        127.0.0.1:33306->3306/tcp   my_database

The first one lets anybody in the world connect to that port, while the second one opens it only on localhost, so you can connect to it only from the machine itself.

Use docker-compose

As you can see, the further we go, the longer the command for running a container gets. You could store the final command in a shell script so you don’t have to type it out or dig through your bash history every time you want to start a container.

But there is a better option: use docker-compose, store all of this in a docker-compose.yml file, and run it with a simple docker-compose up command.

What is docker-compose?

As your stack grows, maintaining the containers and the configuration between them gets harder. Docker provides a tool called docker-compose which lets you describe all of them in a single YAML file.

Right now, I don’t want to get into the details of how to set up and use docker-compose. If you don’t have a clue about it, it’s better to read up on it first and then continue.

I have only one container, should I use docker-compose?

Yes, for sure.

First, you may have only one container now, but when your business and stack grow you will need more containers, and it’s better to have docker-compose from the start. It’s much easier to add new containers to your docker-compose file later.

Secondly, even if you are sure you will only ever have one container, it’s still better to put it in a docker-compose file and avoid one big command. Here is the previous command translated into a docker-compose file.

version: '2'
services:
  my_app:
    image: my-app-image:tag
    container_name: my_app
    env_file: .env_production
    ports:
      - "127.0.0.1:3000:3000"
    command: bundle exec puma -b tcp://0.0.0.0:3000

Now you can run your container with a simple docker-compose up command, and everything will be the same as running it with a long docker run ... command.
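Once the docker-compose.yml is in place, the day-to-day workflow comes down to a few short commands (all of these talk to the Docker daemon, so they are shown for reference):

```
docker-compose up -d      # create and start the container in the background
docker-compose ps         # show its current state
docker-compose logs -f    # follow the container logs
docker-compose down       # stop and remove the container
```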

I hope these tips are helpful. You can find my contact information on the front page, so we can discuss them further if you like.
