Best Practices for Improving Docker Performance

By Staff Contributor on April 3, 2023


Using Docker containers is one of the most popular ways to build modern software. This is due to the way containers work: they’re fast. Containers usually start in a few seconds, and deploying a newer version of a container is almost as quick. But if you’re looking for maximum performance, where every second counts, you’ll need to learn how to improve Docker performance even more.

A few general, easy-to-implement performance tips can be applied almost universally. They don’t typically require many changes to your containers, so they’re an easy win. Other, more advanced options require a bit more effort; they can’t be applied to every setup, but they bring even bigger advantages. We’ll cover both here.

Keep in mind, however, that the performance of the application running inside the container isn’t really influenced by Docker itself. Most Docker performance improvements reduce container build and startup time only. So, if you suffer from slow application startups or restarts, the culprit can be either Docker or the application itself. That’s why it’s important to have a good monitoring system in place: it can show you exactly where performance is lacking. With that said, let’s dive into Docker containers.

Docker Base Image

The first, and the easiest, way to improve your container build and startup time is to use a slim Docker base image. To understand why, let me briefly explain how Docker containers work. Every Dockerfile (the definition of your container image) starts with the keyword FROM. This instruction tells Docker which base image to use to build your container. Everything you want to have in your container is added “on top” of the base image. So, if you use a large base image, you’ll end up with a big container; if you use a small base image, you’ll get a very small container. The size of your final image depends not only on how much you put inside, but also on which base image you use.

Let me give you an example. If you’re developing a Node.js-based application, you’re probably using the official node image (FROM node). The node:alpine image, however, is 9x smaller. This significant difference can make your builds faster. Usually, lightweight images are tagged :alpine or :slim.
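Switching is usually a one-line change in your Dockerfile. Here’s a minimal sketch (node and node:alpine are real Docker Hub tags, but check that your application’s dependencies work on Alpine’s musl-based libc before switching):

# Instead of the full official image:
# FROM node
# use the Alpine-based variant:
FROM node:alpine

You can compare the results on your machine with docker images, which lists every local image along with its size.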

Docker Image Caching

Another easy performance improvement can be achieved by preloading frequently used Docker images on the machine. Whenever you execute docker run or docker build, the first thing Docker does is check whether the specified image is already present on the machine. If not, Docker contacts Docker Hub (or another registry, if specified) and attempts to download it. So, you can get easy performance gains by making sure the machine already has the necessary images. This won’t make a huge difference on your local machine, since Docker automatically saves every image it downloads. But in clustered systems or CI/CD pipelines, it can make a huge difference. Imagine every run of your pipeline forcing Docker to download multiple images (which are then lost once the CI/CD pipeline finishes). By redesigning your CI/CD so images are preloaded on the machine, you can save a lot of time.
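As a simple illustration, you could add a warm-up step to the machine or pipeline setup that pulls the images your builds rely on (the image names below are just examples; substitute whatever your pipeline actually uses):

# Pre-pull base images so later docker build/docker run calls
# hit the local cache instead of the registry
docker pull node:alpine
docker pull postgres:15

In a clustered setup, the same idea applies per node, and some CI systems also let you cache pulled images between runs so the download happens only once.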

Dockerfile Instructions Chaining

Before I give you the next tip, let me explain what happens with every new instruction you put into a Dockerfile. I covered the base image in the previous section. On top of the base image, you probably want to install some software, add some files, configure some parameters, and so on. Every such instruction in the Dockerfile creates a new layer on top of the base image. Ideally, you want to have as few layers as possible. The easiest way to decrease the number of layers is to chain similar instructions. For example, if you need to install curl, wget, and git, instead of doing this:

RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y wget
RUN apt-get install -y git

You can use one RUN instruction:

RUN apt-get update && apt-get install -y curl wget git

See? You can achieve exactly the same outcome, but with one Docker layer instead of four.

Instructions Order

For the next tip, let me remind you what happens when you change something in your Dockerfile. Whenever you do this, of course, you need to rebuild your Docker image. However, depending on where the change is, Docker will rebuild either the whole image or only a small part of it. Docker reuses the cached layers that haven’t changed and rebuilds only the changed layer and everything after it. Therefore, if you order your instructions strategically, you’ll save yourself a lot of time.

What does it mean to order instructions strategically? It means you should put the instructions you’re least likely to change at the beginning of the Dockerfile and those you’ll change more often at the end. This way, you maximize the number of layers Docker can reuse. Suppose your Dockerfile installs packages with RUN apt-get and copies your site’s index.html with ADD. If you place the ADD instruction before the RUN instruction, then every time you change index.html, Docker has to rebuild everything from that point on, including the time-consuming apt-get update and apt-get install. If you place it near the end, however, Docker can use the cached layer with the already executed RUN command, and your build will take significantly less time.
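Here’s a minimal sketch of that ordering (the base image, package, and file paths are just examples):

# Rarely-changed instructions first: these layers stay cached
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx

# Frequently-changed files last: editing index.html only
# invalidates this final layer, not the expensive RUN above
ADD index.html /var/www/html/index.html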

Multi-Stage Build

You can take your Docker images to the next level with Docker multi-stage builds. A multi-stage build lets you separate the building (and testing, if needed) stages from the final image. In other words, you can define all the necessary actions (building the actual artifact, running unit tests, security scanning, or anything else you may need) in a few stages. Then, when everything is built and tested, you instruct Docker to take only the final artifact of your application and build the final image with it. So instead of ending up with a big Docker image full of build tools and testing software, your final image stays very slim, containing only the software necessary to run your application.
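Here’s a minimal sketch for a Node.js application (the npm scripts and the dist/ output directory are assumptions; adjust them to your project):

# Stage 1: build and test with the full toolchain
FROM node:alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# assumes your package.json defines "test" and "build" scripts
RUN npm test && npm run build

# Stage 2: the final image gets only the built artifact
FROM node:alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]

Everything installed in the build stage (dev dependencies, test tooling) is left behind; only what the COPY --from=build instruction pulls over ends up in the final image.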

Use an Orchestration Tool

Let’s say you’ve implemented all these tips, and your Docker images are nice and slim. What do you do now to run your containers optimally? Most modern applications consist of more than one container, and it’s not easy to manage dozens of containers by hand while making sure they use server resources sensibly (meaning no single container eats all the RAM and starves the others). This is a job for an orchestration tool. Tools like Kubernetes can manage this for you, helping you keep your resources under control.
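A full Kubernetes setup is beyond the scope of this post, but the core idea, capping how much of the machine each container may consume, is visible even in plain Docker (the image name below is hypothetical):

# Cap one container at 512 MB of RAM and one CPU; an orchestrator
# like Kubernetes applies the same kind of limits declaratively
# across a whole cluster of machines
docker run --memory=512m --cpus=1 my-app:latest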

What’s Next?

As I mentioned at the beginning, if you’re looking to improve the performance of your application running in a Docker container, you should have a good understanding of what has the biggest impact on performance. That’s because it could be Docker, but it could also be your application itself. You need a tool designed to give you insights and metrics from all layers (infrastructure, Docker, and application). SolarWinds® Observability can do this for you. With the help of deep container monitoring, Observability can help you find exactly where optimizations are needed.


This post was written by Dawid Ziolkowski. Dawid has 10 years of experience in IT: he started as a network/system engineer, moved on to DevOps, and now works as a cloud-native engineer. He’s worked for an IT outsourcing company, a research institute, a telco, a hosting company, and a consultancy, so he’s gathered knowledge from many different perspectives. Nowadays, he helps companies move to the cloud and redesign their infrastructure for a more cloud-native approach.
