I often use mitmproxy to see the HTTP calls that programs make under the hood. vcert, a tool for operating Venafi TPP and Venafi Cloud, did not seem to work with mitmproxy. This post presents the steps I took to discover that the issue comes from a feature mitmproxy does not support: TLS renegotiation.
We often talk about avoiding comments that merely paraphrase what the code does. In this article, I gathered some thoughts about why writing comments is as important as writing the code itself, and how to use the 'what' and the 'why' to spot comments that should be refactored.
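To illustrate the distinction, here is a hypothetical Go snippet (not taken from the article itself):

```go
package main

import "fmt"

func main() {
	retries := 0

	// The "what": this comment merely paraphrases the code and is a
	// candidate for removal.
	// Increment retries by one.
	retries++

	// The "why": this comment explains intent the code cannot express
	// on its own. The first attempt often fails while a dependency
	// warms up, so we allow one extra retry.
	retries++

	fmt.Println(retries)
}
```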
Although the Kubernetes documentation is excellent, the API reference does not document the conditions that can be found in a deployment's status. The Available condition has always eluded me!
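As a rough sketch of what checking it looks like with client-go (the isAvailable helper is made up for illustration):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// isAvailable reports whether the Deployment's Available condition is
// True, i.e. the Deployment currently has minimum availability (enough
// ready replicas according to its rollout strategy).
func isAvailable(deploy *appsv1.Deployment) bool {
	for _, cond := range deploy.Status.Conditions {
		if cond.Type == appsv1.DeploymentAvailable {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// An empty Deployment has no conditions yet.
	fmt.Println(isAvailable(&appsv1.Deployment{}))
}
```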
Kind offers an excellent UX to Kubernetes developers but lacks support for caching images; each time you recreate a cluster, all the previously downloaded images are gone. In this post, I explain why the default Docker network is a trap and how to set up a registry and make sure that it actually works.
Mitmproxy is an excellent tool that helps us understand what network calls are made by programs. kubectl is one such program, but it uses mutual TLS authentication, which is tricky to get right with mitmproxy.
Dynamic libraries and PIC (position-independent code) are great features of modern systems. But getting the right library built can become a nightmare as soon as you rely on other libraries that may or may not have these features in the first place... In this post, I detail the hacks I made to the ./configure-based build system of Yices, a C++ library.
Terraform makes it easy to manage infrastructure at scale; you might want to share code between modules, and that's where it becomes tricky. In this post, I give some clues on how to use Terraform across private GitHub repos.
Kubernetes' extensibility is probably its biggest strength. Controllers and CRDs are all over the place. But finding the right information to begin writing a controller isn't easy due to the sheer amount of tribal knowledge scattered everywhere. Here are some links to help you start.
Client-go is the client library that allows anyone (including Kubernetes itself) to talk to the Kubernetes apiserver. Recently, the Kubernetes team chose to release a breaking version of client-go that adds support for context.Context, without really giving anyone notice. In this post, I detail the workaround and what happened.
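Concretely, the change (which landed in client-go 0.18.0, the release matching Kubernetes 1.18) adds a context.Context as the first argument of every generated client call. A compilable sketch:

```go
package client

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func getPod(clientset *kubernetes.Clientset) error {
	// Before client-go 0.18, the call had no context argument:
	//   clientset.CoreV1().Pods("default").Get("my-pod", metav1.GetOptions{})
	// Since 0.18, a context.Context comes first:
	pod, err := clientset.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println(pod.Name)
	return nil
}
```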
In one of my previous posts, I studied how traffic flows when using Kubernetes Services. While drawing the last diagram, I did not clearly see how traffic could make its way back to the user. In this post, I focus on how packets find their way back and what makes stateless rewriting interesting.
The Service and the Ingress respectively bring L4 and L7 traffic to your pods. In this article, I focus on how traffic flows in and on the interactions between the ingress controller and the service-lb controller (the thing that creates the external load balancer). I also detail how the `hostPort` approach shapes traffic.
Some pods were unable to connect to the kube-proxy pod on one of my GKE Kubernetes clusters. This post presents an in-depth investigation using tcpdump, Wireshark and iptables tracing.
I want to avoid using the expensive Google Network Load Balancer and instead do the load balancing in-cluster using akrobateo, which acts as a LoadBalancer controller.
At some point, the Go team chose to make net/http bypass the proxy for requests to localhost or 127.0.0.1. This is annoying when debugging services locally.
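The behavior is easy to observe: http.ProxyFromEnvironment returns a nil proxy URL whenever the request host is localhost or a loopback address, no matter what HTTP_PROXY says. A minimal reproduction:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Point HTTP_PROXY at some proxy before the first proxy lookup
	// (Go caches the proxy environment on first use).
	os.Setenv("HTTP_PROXY", "http://127.0.0.1:9999")

	for _, target := range []string{"http://localhost:8080", "http://example.com"} {
		req, err := http.NewRequest("GET", target, nil)
		if err != nil {
			panic(err)
		}
		proxyURL, _ := http.ProxyFromEnvironment(req)
		// Prints "<nil>" for localhost: the proxy is silently bypassed.
		fmt.Println(target, "->", proxyURL)
	}
}
```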
Readability is a property we all love about Go. In other languages, it might be fine to have a lot of nested if statements; in Go, it is a good practice to keep away from overly-nested logic.
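The usual trick is to replace nesting with guard clauses that return early, keeping the happy path at the left edge of the function. A generic illustration (the connect functions are made up for the example):

```go
package main

import (
	"errors"
	"fmt"
)

// Nested version: each condition pushes the happy path further right.
func connectNested(host string, port int) error {
	if host != "" {
		if port > 0 {
			fmt.Println("connecting to", host, port)
			return nil
		}
		return errors.New("invalid port")
	}
	return errors.New("missing host")
}

// Flat version: guard clauses handle the error cases up front, and the
// happy path stays unindented.
func connectFlat(host string, port int) error {
	if host == "" {
		return errors.New("missing host")
	}
	if port <= 0 {
		return errors.New("invalid port")
	}
	fmt.Println("connecting to", host, port)
	return nil
}

func main() {
	fmt.Println(connectNested("example.com", 443))
	fmt.Println(connectFlat("example.com", 443))
}
```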
GO111MODULE is all over the place: it appears in README install instructions, in Dockerfiles, in Makefiles. On top of that, the behavior of GO111MODULE changed from Go 1.11 to 1.12, changed again with Go 1.13 and Go 1.15, changed one last time in Go 1.16, and has been stable since then.
Although progress is being made, Kubernetes controllers and operators still require prior knowledge about Kubernetes internals. Information on how to set the status is scattered across comments, issues, PRs and the Kubernetes code itself. Conditions may be a good fit for your controller's status, but what are they for exactly?
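For reference, apimachinery now ships a helper for maintaining such conditions. A minimal sketch using meta.SetStatusCondition on an in-memory slice (the Ready type and ReconcileSuccess reason are made-up examples):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// In a real controller, this slice would live in the object's status.
	var conditions []metav1.Condition

	// SetStatusCondition adds the condition if absent, updates it
	// otherwise, and only bumps LastTransitionTime when the status
	// actually changes.
	meta.SetStatusCondition(&conditions, metav1.Condition{
		Type:    "Ready",
		Status:  metav1.ConditionTrue,
		Reason:  "ReconcileSuccess",
		Message: "the object was reconciled successfully",
	})

	fmt.Printf("%+v\n", conditions)
}
```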