Using Docker

You can also make use of Docker in Scrutinizer's build environment. Simply enable it by adding the following to your configuration:

build:
  nodes:
    node-that-needs-docker:
      environment:
        docker: true

This will automatically start the Docker daemon and install a recent version of Docker Compose.
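Once enabled, the docker command-line client is available to your build commands. A minimal sketch, reusing the node name from above (the alpine image and the echoed text are purely illustrative):

build:
  nodes:
    node-that-needs-docker:
      environment:
        docker: true
      commands:
        - docker run --rm alpine:3 echo "Docker is available"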

Extended Compatibility Mode (Remote Engine)

By default, Docker runs inside the build container. As a result, some syscalls are restricted, and depending on how you use Docker, you might run into errors such as open /proc/####/environ: permission denied, failed to register layer, or other permission/operation denied errors.

For these cases, Scrutinizer provides a separate environment with a Docker engine that allows unrestricted syscalls. Simply set the remote_engine flag to true in your configuration:

build:
  nodes:
    node-that-needs-docker:
      environment:
        docker:
          remote_engine: true

Accessing Exposed Ports

Because the Docker engine is not running inside the local build environment, any ports that are exposed are not available on localhost; they are exposed on the remote host instead. You can access services running there via the $DOCKER_IP environment variable.
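For example, assuming a container that publishes a port (the nginx image and the port numbers are just illustrative):

docker run -d -p 8080:80 nginx
curl "http://$DOCKER_IP:8080/"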

Volume Limitations and Solutions

When using volumes with remote_engine enabled, those volumes refer not to the local build environment but to the host on which the Docker engine is running. As such, you cannot mount a local repository folder when the remote engine is enabled. This behavior is the same as when running Docker via Docker Machine, as is common on Mac OS X.

So, a command like docker run -v ./some-file:/image-path/some-file my-base-image some-command will not work when the remote engine is enabled. To work around this, create an image that copies the file to the desired location. For the example above, the Dockerfile could look like this:

FROM my-base-image

COPY ./some-file /image-path/some-file

You can then build the image in the build environment with docker build -t my-build-image -f Dockerfile ., and afterwards run your original command as docker run my-build-image some-command.
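Wired into the build configuration, the workaround might look like this (the node name matches the earlier examples; the image name is illustrative):

build:
  nodes:
    node-that-needs-docker:
      environment:
        docker:
          remote_engine: true
      commands:
        - docker build -t my-build-image -f Dockerfile .
        - docker run my-build-image some-command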

Defining Registry Logins

If you want to push images or pull private images from a registry, you can define logins in the configuration:

build:
  nodes:
    node-that-needs-docker:
      environment:
        docker:
          logins:
            - { username: "my-user", password: "my-password" } # DockerHub
            - { username: "another-user", password: "some-pass", server: "quay.io" }
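With the DockerHub login above in place, your build commands can pull private images and push images of their own; the image names here are purely illustrative:

docker pull my-user/private-image
docker build -t my-user/my-image .
docker push my-user/my-image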

Caching Images

If you would like to cache image layers that are expensive to compute, you can make use of the docker save and docker load commands. Here is an example:

Tip: The cache usually pays off for layers that are expensive to compute. If a layer only needs to be downloaded, it is usually faster not to cache it, but to download it on each build.

build:
  nodes:
    docker-build:
      environment:
        # caching works with the remote_engine and the local executor
        docker: true

        # This example uses a minimal node, but it works equally well with an auto-setup node.
        commands:
          - command: restore-from-cache repository docker-layers - | docker load
            only_if: exists-in-cache repository docker-layers

          - # ... your commands here

          # see "docker save --help" for syntax
          - docker save YOUR_IMAGE ANOTHER_IMAGE ... | store-in-cache repository docker-layers -