Learning Kubernetes Controllers
Kubernetes’ extensibility is probably its biggest strength. Controllers and CRDs are everywhere. But finding the right information to begin writing a controller isn’t easy: the knowledge is tribal, scattered across many places. This post intends to help you get started with controllers.
Let us begin with some terminology:
- controller: a single loop that watches some objects. This loop is often called the “controller loop”, “sync loop”, or “reconcile loop”.
- controller binary: a binary that runs one or more sync loops. We often refer to it as “controllers”.
- CRD (Custom Resource Definition): a simple YAML manifest that describes a custom object. For example, this CRD defines the acme.cert-manager.io/v1alpha3 Order resource. After applying this CRD to a Kubernetes cluster, you can apply manifests that have the kind “Order”.
  Note: CRDs and controllers are decoupled. You can apply a CRD manifest without having any controller binary running, and it works both ways: a controller binary can run without requiring any custom objects. Traefik is a controller binary that relies on the built-in Service objects.
  Note: the CRD manifest is just a schema. It doesn’t carry any logic (except for the basic validation the apiserver does). The actual logic lives in the controller binary.
- operator: the term “operator” is often used to mean a controller binary together with its CRDs, for example the elastic operator.
Here are the links that I would give to anyone interested in writing their own controller:
sig-api-machinery/controllers.md gives a good intuition as to what a “controller” is:
A Kubernetes controller is an active reconciliation process. That is, it watches some object for the world’s desired state, and it watches the world’s actual state, too. Then, it sends instructions to try and make the world’s current state be more like the desired state.
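The quote above can be sketched in a few lines of plain Go. This is a toy, in-memory analogy (no apiserver, no client-go): `desired` plays the role of the declared state, `actual` the observed state, and the returned actions are what a real controller would send as API requests.

```go
package main

import "fmt"

// reconcile is a toy "level-based" reconciliation step: it compares the
// desired replica count with the actual one and derives the actions
// needed to converge. Real controllers do the same against the apiserver.
func reconcile(desired, actual map[string]int) []string {
	var actions []string
	for name, want := range desired {
		got := actual[name]
		switch {
		case got < want:
			actions = append(actions, fmt.Sprintf("create %d pod(s) for %s", want-got, name))
		case got > want:
			actions = append(actions, fmt.Sprintf("delete %d pod(s) for %s", got-want, name))
		}
	}
	return actions
}

func main() {
	desired := map[string]int{"web": 3} // what the user asked for
	actual := map[string]int{"web": 1}  // what currently runs
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a) // prints: create 2 pod(s) for web
	}
}
```

Note that the loop only looks at the *current* state, not at the events that led there; that is the essence of the “level-based” behaviour discussed below.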
Note: client-go’s informers, listers, and workqueue are not mandatory for writing a controller: you can rely on client-go’s Watch primitive alone to reconcile state. The informers and workqueue add important scalability and reliability features, but they also come at the cost of heavy abstractions. Use client-go’s Watch first to get a sense of what it offers, and then try out informers and the workqueue.
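One of the things the workqueue buys you is deduplication: many watch events for the same object collapse into a single pending sync. Here is a stripped-down, stdlib-only analogue of that idea (client-go’s real `workqueue` also adds rate limiting, retries, and parallelism that this sketch omits):

```go
package main

import (
	"fmt"
	"sync"
)

// Queue is a stripped-down analogue of client-go's workqueue: watch
// events are collapsed into a set of dirty keys, so a worker processes
// each key once per change, no matter how many events piled up for it.
type Queue struct {
	mu    sync.Mutex
	dirty map[string]bool
	keys  []string
}

func NewQueue() *Queue { return &Queue{dirty: map[string]bool{}} }

// Add enqueues a key unless it is already pending.
func (q *Queue) Add(key string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.dirty[key] {
		return // deduplicate: the key is already queued
	}
	q.dirty[key] = true
	q.keys = append(q.keys, key)
}

// Get pops the next pending key; ok is false when the queue is empty.
func (q *Queue) Get() (key string, ok bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.keys) == 0 {
		return "", false
	}
	key = q.keys[0]
	q.keys = q.keys[1:]
	delete(q.dirty, key)
	return key, true
}

func main() {
	q := NewQueue()
	// Three watch events for the same object trigger only one sync.
	q.Add("default/web")
	q.Add("default/web")
	q.Add("default/web")
	q.Add("default/db")
	for key, ok := q.Get(); ok; key, ok = q.Get() {
		fmt.Println("sync", key)
	}
}
```

This is why controllers key their queue by namespace/name rather than passing whole objects around: the sync loop re-reads the latest state when the key is processed.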
Kubebuilder book is a nice starting point. Kubebuilder relies heavily on code generation, which is what most controllers use nowadays (Rancher maintains Wrangler, a loose fork of controller-runtime and controller-tools that also generates code but with their own “style” – for example, simple flat interfaces instead of client-go‘s deeply nested interfaces that don’t feel like Go).
The Kubernetes API conventions is an amazing document. It summarizes a lot of the “tribal knowledge” around naming and how sync loops are conceived and what they mean by “level-based behaviour”.
A GitHub search for “language:yaml language:go kubernetes controllers” turns up tons of nice examples of controllers.
cert-manager‘s codebase is a nice controller to take a look at
The ClusterAPI Meeting notes contains a ton of useful information on Machine, MachinePool… (crazy how much I learned from that).
The status field is tricky. You can take a look at “conditions vs. phases vs. reasons”.
The Kubernetes codebase itself is also a very nice read. It might feel overwhelming at first; I invite you to take a look at a few of the following sync loops, for example in the kube-controller-manager or the kubelet. Since each sync loop reads or updates different objects, I also detail which objects are read, created, or updated by each sync loop:

| binary | sync loop = component | reads | creates/updates |
|---|---|---|---|
| kube-controller-manager | deployment | Deployment | ReplicaSet |
| kube-controller-manager | replicaset | ReplicaSet | Pod |
The podcast episode “Gotime #105 – Kubernetes and Cloud Native” (Oct. 2019) with Joe Beda (initiator of Kubernetes) and Kris Nova is very interesting and tells us more about the genesis of the project, covering things like why Kubernetes is written in Go and why client-go feels like Java. For example:
Kris Nova: I think there’s a fourth role. I think there’s what we called in the book an infrastructure engineer. These are effectively the folks like Joe and myself. These are the folks who are writing software to manage and mutate infrastructure behind the scenes. Folks who are contributing to Kubernetes, folks who are writing the software for the operators, folks who are writing admission controller implementations and so forth… I think it’s this very new engineer role, that we haven’t seen until we’ve started having – effectively, as Joe likes to put it, a platform-platform.
The operator-sdk (RedHat) is a package that aims at helping you deal with all the scaffolding involved in writing a sync loop. It relies on controller-runtime. I don’t use either of them, but taking a look at these projects helps in understanding the challenges (read: boilerplate) that come with writing controllers. I personally write all the controller-related boilerplate myself (creating the queue, setting event handlers, running the loop itself…).
And a final note: CRDs are not necessary for writing a controller! You can write a tiny controller that watches the “standard” Kubernetes objects. That’s exactly what ingress controllers do: they watch the built-in Ingress and Service objects.
- Update 23 April 2020: I added a quote from Kris Nova! 😁
- Update 2 May 2020: Rephrased the “terminology” bullet points to make them clearer and added a note on CRD vs. controller binary.