
Speed up C++ compilation, part 2: compiler cache

How to speed up C++ compilation with a compiler cache.

What’s in a cache

Caching is one of the most popular techniques for improving performance in the computer industry. There are processor caches, disk caches, DNS caches, web caches and so on. No surprise, then, that there's also a compiler cache. The idea behind it is simple: take the compiler's input and cache its output. Let's think about what constitutes an input and an output for a compiler cache.

There are a few factors at play here. Source code is not the only thing a C++ compiler takes into account when producing output. Obviously, compiler flags such as the optimization level, debug info and target processor influence code generation, so every flag that can even potentially have a say in code emission is an input for a compiler cache. Other flags, for example the various warning switches, have no effect on the produced code but control the type and verbosity of compiler diagnostics. Those diagnostics are also compiler output that ought to be cached!


[Diagram: compiler cache inputs and outputs]

Includes and preprocessor macros are taken care of by hashing the preprocessed file. Not every byte of that file has to be unchanged for a cache hit: comments can be safely omitted before hashing, which lets a programmer edit comments even in the most commonly included headers without worrying about recompilation. Hashing such a file yields an identical hash code and lets the output be fetched from the cache, provided no other inputs have changed.
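To illustrate why comment-only edits can survive rehashing, here is a minimal Python sketch. The naive regex-based comment stripper is purely illustrative — real caches rely on the compiler's own lexing — but it shows the principle:

```python
import hashlib
import re

def strip_comments(source: str) -> str:
    # Naive sketch: remove /* ... */ and // ... comments before hashing,
    # so that comment-only edits produce the same hash.
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    return re.sub(r"//[^\n]*", "", source)

def source_hash(source: str) -> str:
    return hashlib.sha256(strip_comments(source).encode()).hexdigest()

# Two versions of a file that differ only in comments hash identically,
# while a real code change produces a different hash.
a = source_hash("int f() { return 1; } // old comment")
b = source_hash("int f() { return 1; } /* rewritten comment */")
c = source_hash("int f() { return 2; } // old comment")
```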

For a compiler cache, the compiler itself is also an input. Its name, version, modification time and possibly other attributes are hashed in order to uniquely identify the compiler used for past compilations. Compilers may not change every five minutes in a developer's environment, but it's crucial to ensure that outputs from different compilers are never combined inadvertently.
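Putting the pieces together, a cache lookup boils down to hashing every input that influences the output: the compiler's identity, the flags, and the preprocessed source. A minimal Python sketch — the function name and its inputs are illustrative, not any real cache's API:

```python
import hashlib

def cache_key(compiler_id: str, flags: list[str], preprocessed_source: str) -> str:
    # Hash every input that can influence the cached output: both the
    # object code and the diagnostics depend on these.
    h = hashlib.sha256()
    for part in (compiler_id, *flags, preprocessed_source):
        h.update(part.encode())
        h.update(b"\0")  # separator, so ("ab", "c") != ("a", "bc")
    return h.hexdigest()

# Identical inputs -> identical key -> cache hit.
k1 = cache_key("g++ 12.2.0", ["-O2", "-g"], "int main() { return 0; }")
k2 = cache_key("g++ 12.2.0", ["-O2", "-g"], "int main() { return 0; }")
# A codegen-affecting flag change must miss the cache.
k3 = cache_key("g++ 12.2.0", ["-O3", "-g"], "int main() { return 0; }")
```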

When caching works

In a few cases, a compiler cache can improve compile times tremendously. Let's say you have to switch between debug and release builds during development. Unless you keep two separate, out-of-source build directories, you have to recompile the whole project from scratch every time. With a compiler cache, subsequent recompilations are much shorter, since usually only a handful of files change between builds.
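The debug/release scenario can be modelled with a toy in-memory cache. Everything here is a stand-in for the real compiler and cache, but it shows why the third build below is nearly free:

```python
import hashlib

cache: dict[str, str] = {}

def compile_with_cache(source: str, flags: str) -> tuple[str, bool]:
    """Toy model of a cached compilation: returns (object_code, was_cache_hit)."""
    key = hashlib.sha256((flags + "\0" + source).encode()).hexdigest()
    if key in cache:
        return cache[key], True
    obj = f"<object code for {flags}>"  # stand-in for actually running the compiler
    cache[key] = obj
    return obj, False

_, hit_debug_1 = compile_with_cache("int main(){}", "-O0 -g")  # cold cache: miss
_, hit_release = compile_with_cache("int main(){}", "-O2")     # different flags: miss
_, hit_debug_2 = compile_with_cache("int main(){}", "-O0 -g")  # back to debug: hit!
```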

Another great use case for a compiler cache is a build server. In the world of continuous integration, changes are frequently sent to a CI server like Jenkins to be built and tested. Frequent, small changes mean there are only minor differences in the generated object files between builds. When the CI server is configured to use a compiler cache, the result is shorter build times and quicker feedback to developers.

Free compiler caches

One of the first caches for C and C++ was a shell script written by Erik Thiele [1]. It became an inspiration for another tool, ccache [2]. Nowadays it's the most popular free compiler cache for gcc and clang, and it's fairly easy to set up and use. According to the manual, there are two ways to use it: intrusive or non-intrusive. The intrusive way is to create symlinks to ccache named after the compilers and put them earlier in PATH. The non-intrusive way is to prepend ccache to each compilation command. The latter can be easily achieved in CMake (>2.8.0) with the following excerpt [3]:

find_program(CCACHE_FOUND ccache)
if(CCACHE_FOUND)
    set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE ccache)
    set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK ccache)
endif()

As for performance gains, I'll resort to ccache's own measurements. The cache can work in two modes: in direct mode it hashes the source code directly, and in preprocessor mode it hashes the preprocessor output from the compiler. Each mode has its own caveats, so be sure to check the documentation before choosing one over the other. Here are the results of compiling Samba 3.5.3 on their reference machine:
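The two modes can be sketched as follows. Both functions are illustrative simplifications — ccache's real keys also cover flags, compiler identity and more — and the preprocessor-mode sketch assumes a g++ binary is available:

```python
import hashlib
import subprocess

def key_direct(source_path: str, include_paths: list[str]) -> str:
    # Direct mode: hash the source and its includes as-is, skipping the
    # (comparatively slow) preprocessor run entirely on a cache hit.
    h = hashlib.sha256()
    for path in (source_path, *include_paths):
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def key_preprocessor(source_path: str, cxx: str = "g++") -> str:
    # Preprocessor mode: run the real preprocessor and hash its output;
    # slower per call, but sidesteps include-tracking corner cases.
    result = subprocess.run([cxx, "-E", source_path],
                            capture_output=True, check=True)
    return hashlib.sha256(result.stdout).hexdigest()
```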

                                  Elapsed time   Percent    Factor
Without ccache                        316.23 s   100.00 %   1.0000 x
ccache 3.0 direct, first time         375.16 s   118.64 %   0.8429 x
ccache 3.0 direct, second time         32.09 s    10.15 %   9.8545 x
ccache 3.0 prepr., first time         360.62 s   114.04 %   0.8769 x
ccache 3.0 prepr., second time        161.44 s    51.05 %   1.9588 x

Almost a 10x speedup in the second run in direct mode. Pretty neat!

For Visual Studio users, there is the clcache script [4], which was heavily inspired by ccache and works in a similar way.

Proprietary solutions

The tools described above work alongside the compiler and aim to relieve it of superfluous work. Zapcc [5] takes a different approach: it builds upon clang to create a compatible compiler with an in-memory cache. As I understood from the discussion on the cfe-dev mailing list [6], the compiler works in a client-server manner. The server (zapccs) stays in memory and manages its cache while listening for compilation commands from the client (zapcc). The cache stores things like already parsed headers, template instantiations (!) and generated code.

This is really cool and I hope zapcc will make it through its beta and become a successful project. As for performance gains, I'll cite the project's FAQ:

It can range from no acceleration at all for plain C projects to x2-x5 for build-all of heavily templated projects, up to cases of x50 speedups in developer-build incremental change of one file.

Zapcc is free for non-commercial projects so if you run an OSS project you may want to give it a spin!


Compiler caches are easy to integrate with contemporary compilers and can provide a significant boost to compile times. There exist both free and proprietary solutions to choose from that offer various approaches to caching.

References and further reading

[1] Compilercache by Erik Thiele

[2] ccache project page

[3] How to use ccache with CMake

[4] clcache project page

[5] zapcc homepage

[6] Yaron Keren on zapcc technicals

[7] Speed up C++ compilation, part 1: precompiled headers

Header photo “Army Racing pit stop” by The U.S. Army, available under Creative Commons Attribution license.


  1. redvis

    Nice article, thanks! It is worth mentioning that when you need to clean ccache you can run ‘make all CCACHE_RECACHE=1’ 🙂

  2. js

    Another option is Stashed which is easier to setup than ccache when using Visual Studio.

  3. For CI use cases you might find sccache interesting:

    We wrote it at Mozilla for use in Firefox CI, where we do hundreds or thousands of builds a day, many of which run at the same time (because developers push new code before builds for the previous push have finished), and our build machines are ephemeral EC2 instances. It was originally implemented in Python, but I rewrote it in Rust a few years ago, which has the benefit of making it easy to deploy (you can get prebuilt, mostly standalone binaries from the GitHub releases). We use an S3 bucket as the cache store, which is pretty fast when building in EC2, but people have contributed other cache backends as well (such as Redis and Google Cloud Storage).

    FWIW, sccache is not quite as good as ccache for local development because it only implements the equivalent of ccache’s preprocessor mode, not its direct mode, so sccache always runs the C preprocessor on the input sources.
