At 3mdeb we use Docker heavily. The main tasks we perform with it are:

  • firmware and embedded software building – each piece of software in an embedded system requires a slightly different build environment. Configuring those development environments directly on your host can quickly make a mess of a system you use daily, which is why we created the various containers enumerated below
  • trainings/workshops – when we run trainings we don’t want attendees to waste time reconfiguring the environment. In general, we have 2 choices: VMs or containers. Since we already use containers for building and development, we prefer containers (or containers inside VMs) when running trainings.
  • rootfs building for infrastructure deployment – we maintain the pxe-server project, which helps us in firmware testing and development. That project needs custom rootfs images and kernels, so we decided to combine Docker and Ansible for reliable building of that infrastructure

To list some of our repositories:

Some are actively maintained, some are not; some came from other projects, some were created from scratch – but all of them have value for embedded systems developers.

Those solutions are great, but we think it is very important to optimize performance in all those use cases. To clarify, we have to distinguish the most time-consuming tasks in the above containers:

  • code compilation – there are whole books about that topic, but since we mostly use Linux, we think the key factor is support for ccache, and this is our first goal in this post
  • package installation – even when you are using httpredir for apt, you will still spend a significant amount of time downloading and installing packages. Because of that, it is very important to have an apt caching proxy such as apt-cacher-ng, either locally or on a server in your LAN. We will show how to use it with Docker at both build time and runtime.

ccache

The following example shows ccache usage with xen-docker. A great post about that topic was published by Tim Potter here.

Of course, to use ccache in our container we need it installed, so make sure your Dockerfile contains that package. You can take a look at the xen-docker Dockerfile.

I also installed ccache on my host to control its contents:
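On a Debian-based host this can look as follows (the exact commands are a sketch; by default ccache keeps its cache in `~/.ccache`):

```shell
# Install ccache on the host (Debian/Ubuntu)
sudo apt-get install -y ccache

# Show cache statistics - a quick way to verify ccache is being used
ccache -s

# The cache contents live here by default
ls ~/.ccache
```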

Moreover, it is important to pay attention to the directory structure and volumes, because we can easily end up with a non-working ccache. A clear indication that ccache works is that it shows some statistics. The ccache configuration in the Dockerfile should look like this:
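A minimal sketch of such a Dockerfile fragment (the user name and cache path are illustrative – they have to match the volume you mount at runtime):

```dockerfile
# Install ccache and put its compiler symlinks first in PATH, so that
# gcc/cc invocations are transparently wrapped by ccache
RUN apt-get update && apt-get install -y ccache
ENV PATH /usr/lib/ccache:$PATH

# Keep the cache in a fixed, known location so it can be bind-mounted
ENV CCACHE_DIR /home/user/.ccache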

Then, to run the container with ccache, we can pass our ~/.ccache as a volume. For single-threaded compilation, assuming you checked out the correct code and called ./configure:
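A sketch of such an invocation (the image name `xen-docker`, the in-container user `user`, and the mount paths are assumptions – the cache mount point must match `CCACHE_DIR` inside the image):

```shell
# Share the host's ccache directory with the container and build
# the sources from the current directory single-threaded
docker run --rm -it \
    -v ~/.ccache:/home/user/.ccache \
    -v "$PWD":/home/user/xen \
    -w /home/user/xen \
    xen-docker \
    make
```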

Before we start measuring performance, we also have to mention a little terminology. Below we use the terms cold cache and hot cache; this was greatly explained on StackOverflow, so I will not repeat it here. In short, cold means empty and hot means the cache holds values from previous runs.

Performance measures

No ccache single-threaded:

No ccache multi-threaded:

Let’s make sure ccache is empty:
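One way to do that with ccache's own command-line flags:

```shell
ccache -C   # clear the cache contents entirely (cold cache)
ccache -z   # zero the statistics counters
ccache -s   # confirm: cache size and hit counters are back at zero
```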

Cold cache:

And the stats of ccache:

Hot cache:

And the stats of ccache:

I’m not a ccache expert and cannot explain all the results – e.g. why is the hit rate so low when we compile the same code?

To conclude, we can gain up to 30% with a hot cache. The biggest gain comes with multithreading, but this highly depends on the CPU; in my case I had 8 jobs running simultaneously and the gain in compilation time was 40%.

apt-cacher-ng

There are 2 use cases for apt-cacher-ng in our workflows. One is Docker build time, which can be time-consuming since all packages and their dependencies are installed into the base image. The second is runtime, when you need some package that may have extensive dependencies, e.g. xen-system-amd64.

First, let’s set up apt-cacher-ng. A guide can be found in the Docker documentation, but we will modify it a little bit.

Ideally, we would like to use docker-compose to set up the apt-cacher-ng container whenever it is not running, or have a dedicated VM which serves this purpose. In this post, we consider a local cache. The Dockerfile may look like this:
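A minimal variant, adapted from the apt-cacher-ng example in the Docker documentation (the base image tag is an assumption):

```dockerfile
FROM debian:stretch

RUN apt-get update && apt-get install -y apt-cacher-ng

# apt-cacher-ng listens on 3142 by default
EXPOSE 3142

# Start the proxy and keep following its log in the foreground,
# so `docker logs` shows the cache activity
CMD chmod 777 /var/cache/apt-cacher-ng && \
    /etc/init.d/apt-cacher-ng start && \
    tail -f /var/log/apt-cacher-ng/apt-cacher.log
```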

Build and run:
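For example (the image and container names are illustrative):

```shell
# Build the proxy image from the Dockerfile above
docker build -t apt-cacher-ng .

# Run it detached, publishing the proxy port to the host
docker run -d -p 3142:3142 --name apt-cacher-ng apt-cacher-ng
```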

The output should look like this:

We should also see that the cacher listens on port 3142:
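This can be checked from the host, for example:

```shell
# Show listening TCP sockets and filter for the proxy port
ss -tlnp | grep 3142

# Or query the proxy directly - apt-cacher-ng serves a status page
# at /acng-report.html; an HTTP 200 means it is up
curl -sI http://localhost:3142/acng-report.html | head -n 1
```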

The Dockerfile of the image we want to build should contain the following environment variable:
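A sketch of that line (the address is an assumption – `172.17.0.1` is the default docker0 bridge address as seen from containers; adjust it to your setup):

```dockerfile
# Route apt's HTTP traffic through the local apt-cacher-ng proxy
ENV http_proxy http://172.17.0.1:3142
```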

Now we can run the Docker build with the appropriate parameters:
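Alternatively, since `http_proxy` is one of Docker's predefined build arguments, the proxy can be passed only for the build, without baking it into the image (proxy address assumed as above):

```shell
# Use the cacher at build time only; no ENV needed in the Dockerfile
docker build -t xen-docker \
    --build-arg http_proxy=http://172.17.0.1:3142 .
```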

Performance measures

First, the xen-docker container build without apt-cacher-ng. To measure what is going on during the container build, we use ts from the moreutils package.
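A sketch of how the build output can be timestamped with ts:

```shell
# Prefix every line of the build output with a wall-clock timestamp,
# so slow build steps stand out (ts comes from the moreutils package)
docker build -t xen-docker . 2>&1 | ts '%H:%M:%S'
```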

Without cacher:

With cold cache:

With hot cache:

Assuming that the network conditions did not change between runs to the extent of a 30 s difference, we can conclude:

  • using the cacher even with a cold cache is better than nothing – it gives a speedup of about 5%
  • using a hot cache can save ~20% of the normal container build time, if a significant amount of that time is package installation

Of course, those numbers should be confirmed statistically.

Let’s try something more complex

Finally, we can try to run something much more sophisticated, like our debian-rootfs-builder. This code consists mostly of compilation and package installation through apt-get.

Initial build statistics were quite bad:

After adding ccache with hot cache:

ccache stats:

This is not a dramatic improvement, but we gained another 13% and the build time is now reasonable. Still, the most time-consuming tasks belong to the compilation and package installation buckets.

Summary

If you have any other ideas about optimizing code compilation or container build time, please feel free to comment. If this post gains popularity, we will probably reiterate it with the best advice from our readers.

If you are looking for an Embedded Systems DevOps engineer who will optimize your firmware or embedded software build environment, look no more and contact us here.