maelvls dev blog

Systems software engineer. I write mostly about Kubernetes and Go.

03 Jul 2020

Pull-through Docker registry on Kind clusters


Note: (August 2023) As mentioned by Wolfgang Schnerring in this comment, the method presented in this document is very limited: many images are still being pulled directly and it only proxies a single “upstream” registry. Nowadays, docker-registry-proxy is a better alternative: https://github.com/rpardini/docker-registry-proxy#kind-cluster.

TL;DR:

  • to create a local pull-through registry to speed up image pulling in a Kind cluster, run:

    docker run -d --name proxy --restart=always --net=kind \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2
    kind create cluster --config /dev/stdin <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    containerdConfigPatches:
      - |-
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["http://proxy:5000"]
    EOF
    
  • you can’t use this pull-through proxy registry to push your own images (e.g. to speed up Tilt builds), but you can create two registries (one for caching, the other for local images). See this section for more context; the lines are:

    docker run -d --name proxy --restart=always --net=kind \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2
    docker run -d --name registry --restart=always --net=kind \
      -p 5000:5000 registry:2
    kind create cluster --config /dev/stdin <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    containerdConfigPatches:
      - |-
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["http://proxy:5000"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
          endpoint = ["http://registry:5000"]
    EOF
    
  • in case you often create & delete Kind clusters, using a local registry that serves as a proxy avoids redundant downloads

  • KIND_EXPERIMENTAL_DOCKER_NETWORK is useful, but remember that the default network (bridge) doesn’t have DNS resolution for container hostnames (see the example right after this list)

  • the Docker default network (bridge) has limitations as detailed by Docker.

  • If you play with ClusterAPI and its Docker provider (CAPD), you might not be able to use a local registry, because the workload clusters are created on the default network, which means the “proxy” hostname won’t be resolved (but we could work around that).
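To illustrate the KIND_EXPERIMENTAL_DOCKER_NETWORK point: the variable lets you choose the Docker network on which Kind creates its node containers (it is experimental, so behavior may change between Kind releases). Remember that on bridge, container-name DNS won’t work:

# Create the cluster on the default "bridge" network instead of "kind";
# note that container names like "proxy" won't resolve there.
KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge kind create cluster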


Kind is an awesome tool that lets you spin up local Kubernetes clusters in seconds. It is perfect for Kubernetes developers or anyone who wants to play with controllers.

One thing I hate about Kind is that images are not cached across Kind clusters. Even worse: when deleting and re-creating a cluster, all the downloaded images disappear.

In this post, I detail my discoveries around local registries and why the default Docker network is a trap.

Contents:

  1. Kind has no image caching mechanism
  2. Creating a caching proxy registry
  3. Creating a Kind cluster that knows about this caching proxy registry
  4. Check that the caching proxy registry works
  5. Docker proxy vs. local registry
  6. Improving the ClusterAPI docker provider to use a given network

Kind has no image caching mechanism

Whenever I re-create a Kind cluster and try to install ClusterAPI, all the (quite heavy) images have to be re-downloaded. Just take a look at all the images that get re-downloaded:

# That's the cluster created using 'kind create cluster'
% docker exec -it kind-control-plane crictl images
IMAGE                                                                      TAG      SIZE
quay.io/jetstack/cert-manager-cainjector                                   v0.11.0  11.1MB
quay.io/jetstack/cert-manager-controller                                   v0.11.0  14MB
quay.io/jetstack/cert-manager-webhook                                      v0.11.0  14.3MB
us.gcr.io/k8s-staging-capi-docker/capd-manager/capd-manager-amd64          dev      53.5MB
us.gcr.io/k8s-artifacts-prod/cluster-api/cluster-api-controller            v0.3.0   20.3MB
us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-bootstrap-controller      v0.3.0   19.6MB
us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-control-plane-controller  v0.3.0   21.1MB

# I also use a ClusterAPI-created cluster (relying on CAPD):
% docker exec -it capd-capd-control-plane-l4tx7 crictl images
docker.io/calico/cni                  v3.12.2             8b42391a46731       77.5MB
docker.io/calico/kube-controllers     v3.12.2             5ca01eb356b9a       23.1MB
docker.io/calico/node                 v3.12.2             4d501404ee9fa       89.7MB
docker.io/calico/pod2daemon-flexvol   v3.12.2             2abcc890ae54f       37.5MB
docker.io/metallb/controller          v0.9.3              4715cbeb69289       17.1MB
docker.io/metallb/speaker             v0.9.3              f241be9dae666       19.2MB

That’s a total of 418 MB that get re-downloaded every time I restart both clusters!

Unfortunately, there is no way to re-use the image store built into your local Docker engine (neither on Linux nor on macOS). One solution to this problem is to spin up an intermediary Docker registry in a side container; as long as this container exists, all the images that have already been downloaded once can be served from cache.

Creating a caching proxy registry

We want to create a caching registry alongside a simple Kind cluster; let’s start with the registry:

docker run -d --name proxy --restart=always --net=kind -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2

Details:

  • --net=kind is required because Kind creates its containers on a separate network; it does that because the default “bridge” network has limitations and doesn’t let you use container names as DNS names (a quick demonstration follows this list):

    By default, a container inherits the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.

    which means that, on the default bridge network, the container runtime (containerd) that runs our Kind cluster wouldn’t be able to resolve the address proxy:5000.

  • REGISTRY_PROXY_REMOTEURL is required because, by default, the registry doesn’t forward requests: it simply tries to find the image in /var/lib/registry/docker/registry/v2/repositories and returns a 404 if it doesn’t find it.

    With the pull-through feature enabled (I call it “caching proxy”), the registry forwards requests for all mirror prefixes to the remote registry and caches the blobs and manifests locally. To enable this feature, we set REGISTRY_PROXY_REMOTEURL.

    Another interesting bit about REGISTRY_PROXY_REMOTEURL: this environment variable name is mapped from the registry’s YAML config API. The variable

    REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io
    

    is equivalent to the following YAML config:

    proxy:
      remoteurl: https://registry-1.docker.io
    

    ⚠️ The registry can’t be both in normal mode (“local proxy”) and in caching proxy mode at the same time, see below.
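To see why the custom network matters, here is a quick experiment (container and network names are arbitrary):

# On the default "bridge" network, container names are not resolvable:
docker run -d --name web nginx
docker run --rm busybox ping -c1 web               # fails: bad address 'web'

# On a user-defined network, Docker's embedded DNS resolves container names:
docker network create demo
docker run -d --name web2 --net=demo nginx
docker run --rm --net=demo busybox ping -c1 web2   # works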

Creating a Kind cluster that knows about this caching proxy registry

The second step is to create a Kind cluster and tell the container runtime to use a specific registry; here is the command to create it:

kind create cluster --config /dev/stdin <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://proxy:5000"]
EOF

Note: containerdConfigPatches is a way to semantically patch /etc/containerd/config.toml. By default, this file looks like:

% docker exec -it kind-control-plane cat /etc/containerd/config.toml
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
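To double-check that the patch was applied, you can grep the resulting file inside the node container; you should see the endpoint pointing at http://proxy:5000 (the exact TOML layout may vary across containerd versions):

% docker exec kind-control-plane grep -A1 'mirrors."docker.io"' /etc/containerd/config.toml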

Note 2: the mirror prefix (docker.io) can be omitted for images hosted on Docker Hub; for other registries such as gcr.io, the prefix must be given. Here is a table with some examples: image names without a registry prefix are first prepended with “docker.io”, and the final address is obtained by matching the prefix against the mirror entries:

image name                    “actual” image name           registry address w.r.t. mirrors
alpine                        docker.io/alpine              https://registry-1.docker.io/v2/library/alpine/manifests/latest
gcr.io/istio-release/pilot    gcr.io/istio-release/pilot    https://gcr.io/v2/istio-release/pilot/manifests/1.9.1
foo.org/something/someimage   foo.org/something/someimage   https://foo.org/v2/something/someimage/manifests/latest
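This also means you can cache registries other than Docker Hub. As a hypothetical extension of the setup above (the gcr-proxy container name is mine; anonymous pulls should work, but some registries may need extra auth configuration), a second pull-through registry for gcr.io would look like:

docker run -d --name gcr-proxy --restart=always --net=kind \
  -e REGISTRY_PROXY_REMOTEURL=https://gcr.io registry:2
kind create cluster --config /dev/stdin <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://proxy:5000"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
      endpoint = ["http://gcr-proxy:5000"]
EOF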

Check that the caching proxy registry works

Let’s see if the proxy registry works by running a pod:

% kubectl run foo -it --rm --image=nicolaka/netshoot
% docker exec -it proxy ls /var/lib/registry/docker/registry/v2/repositories
nicolaka

We can also see through the registry logs that everything is going well:

% docker logs proxy | tail
time="2020-07-26T14:52:44.2624761Z" level=info msg="Challenge established with upstream : {https registry-1.docker.io /v2/}" go.version=go1.11.2 http.request.host="proxy:5000" http.request.id=15e9ac86-7d79-4883-a8ce-861a7484887c http.request.method=HEAD http.request.remoteaddr="172.18.0.2:57588" http.request.uri="/v2/nicolaka/netshoot/manifests/latest" http.request.useragent="containerd/v1.4.0-beta.1-34-g49b0743c" vars.name="nicolaka/netshoot" vars.reference=latest
time="2020-07-26T14:52:45.4195817Z" level=info msg="Adding new scheduler entry for nicolaka/netshoot@sha256:04786602e5a9463f40da65aea06fe5a825425c7df53b307daa21f828cfe40bf8 with ttl=167h59m59.9999793s" go.version=go1.11.2 instance.id=ba959eb9-2fa3-47c0-beb7-91480c8a31ee service=registry version=v2.7.1
172.18.0.2 - - [26/Jul/2020:14:52:43 +0000] "HEAD /v2/nicolaka/netshoot/manifests/latest HTTP/1.1" 200 1999 "" "containerd/v1.4.0-beta.1-34-g49b0743c"
time="2020-07-26T14:52:45.4204299Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="proxy:5000" http.request.id=15e9ac86-7d79-4883-a8ce-861a7484887c http.request.method=HEAD http.request.remoteaddr="172.18.0.2:57588" http.request.uri="/v2/nicolaka/netshoot/manifests/latest" http.request.useragent="containerd/v1.4.0-beta.1-34-g49b0743c" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=1.6697112s http.response.status=200 http.response.written=1999

Docker proxy vs. local registry

A bit later, I discovered that you can’t push to a pull-through proxy registry. Tilt is a tool I use to ease the process of developing in a containerized environment (it works best with Kubernetes); it relies on a local registry in order to keep built images cached even when the Kind cluster is re-created.

A registry is either a “local registry” (where you can push images) or a pull-through proxy; it can’t be both. So instead of configuring a single “proxy” registry, I configure two registries: one for local images, one for caching.

docker run -d --name proxy --restart=always --net=kind -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2
docker run -d --name registry --restart=always -p 5000:5000 --net=kind registry:2
kind create cluster --config /dev/stdin <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://proxy:5000"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
      endpoint = ["http://registry:5000"]
EOF

Note that we publish port 5000 (-p 5000:5000) so that we can push images “from the host”, e.g.:

% docker tag alpine localhost:5000/alpine
% docker push localhost:5000/alpine
The push refers to repository [localhost:5000/alpine]
50644c29ef5a: Pushed
latest: digest: sha256:a15790640a6690aa1730c38cf0a440e2aa44aaca9b0e8931a9f2b0d7cc90fd65 size: 528

# Let's see if this image is also available from the cluster:
% docker exec -it kind-control-plane crictl pull localhost:5000/alpine
Image is up to date for sha256:a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e
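From the Kubernetes side, the pushed image can also be referenced by its localhost:5000 name, since containerd rewrites that prefix to http://registry:5000 (a quick smoke test; the pod name is arbitrary):

% kubectl run alpine-test --image=localhost:5000/alpine --restart=Never -- sleep 3600
% kubectl get pod alpine-test   # should reach Running, no ImagePullBackOff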

If you use Tilt, you might also want to tell Tilt that it can use the local registry. I find it a bit weird to have to set an annotation (hidden Tilt API?) but whatever. If you set this:

kind get nodes | xargs -L1 -I% kubectl annotate node % tilt.dev/registry=localhost:5000 --overwrite

then Tilt will use docker push localhost:5000/your-image (from your host, not from the cluster container) in order to speed things up. Note that there is a proposal (KEP 1755) that aims at standardizing the discovery of local registries using a ConfigMap. Tilt already supports it, so you may use it; a sketch follows.
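For reference, here is what that KEP 1755 ConfigMap would look like in our case; this is a sketch based on the format used in the Kind local-registry documentation, so double-check the current spec before relying on it:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:5000"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF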

Improving the ClusterAPI docker provider to use a given network

When I play with ClusterAPI, I usually use the CAPD provider (ClusterAPI Provider Docker). This provider is kept in-tree inside the cluster-api project.

I want to use the caching mechanism presented above. But to do that, I need to make sure the clusters created by CAPD are not created on the default network (the current implementation creates CAPD clusters on the default “bridge” network).

I want to be able to customize the network on which the CAPD provider creates the containers that make up the cluster. Imagine that we could pass the network name as part of a DockerMachineTemplate (the content of the spec is defined in code here):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: DockerMachineTemplate
metadata:
  name: capd-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
        - containerPath: /var/run/docker.sock
          hostPath: /var/run/docker.sock

      network: kind # 🔰 This field does not exist yet.

Update 26 July 2020: added a section about local registry vs. caching proxy. Reworked the whole post (less noise, more useful information).
