Since 2015 we have been using Kubernetes within the Luminis organisation as the foundation of our Cloud Platform. A lot has changed since then, both in Kubernetes and in the Cloud Platform we build on top of Kubernetes. We continue to improve our platform, not only by adding new features but also by replacing existing features with better alternatives.
One of the custom features in our platform we wanted to replace with off-the-shelf functionality was our load balancing solution. Until now we used a setup combining Amazon Elastic Load Balancers, HAProxy, Confd & Etcd. Although this setup continued to run without a hitch, we'd rather have a proven solution supported by the vast Kubernetes community than continue to invest in our own custom solution. The obvious choice was to start using Kubernetes Ingress Resources. While good documentation exists on the Kubernetes website, it wasn't that obvious what the best approach would be to deploy such a setup on AWS. For that, this blog deserves a shoutout, as it helped me get started big time! It offers a good starting point for setting up an Ingress Controller, including SSL offloading.
To replace our existing load balancing solution we had a couple of requirements that were classified as MUST HAVES:
- Flexible SSL certificate management (old setup needed a separate ELB for each certificate that we needed to support)
- Good load balancing strategies (Kubernetes services are way too limited for that) & sticky session support
- Configuration reload without downtime (important for our Blue-Green deployments)
- Possibility to add custom HTTP headers to individual backends
- Good performance & great reliability
While other alternatives exist, we decided to go with the NGINX based implementation, which is part of the overall Kubernetes project. One of the reasons for this choice is that this controller connects directly to the Kubernetes pods instead of going through a Kubernetes service. This enables more advanced sticky session support compared to the IP based sticky session implementation of the current Kubernetes services.
When you want to deploy an Ingress Controller in an AWS environment, you basically have two ways to get started:
- You create a Kubernetes service of the type LoadBalancer, while making sure that your Kubernetes environment is allowed to create ELBs in AWS. That way everything should be set up automatically for you.
- You do it “by hand”, which is what I did. I had a couple of reasons for this approach; one of them was that the ELB I wanted to use in this setup already existed, so I didn’t want Kubernetes to create an ELB for me. Instead of a service of type LoadBalancer, we use a service of type NodePort to expose the Ingress Controller.
Ingress Controller deployment
Amazon Elastic Load Balancer
We use the classic ELB, not the newer Application Load Balancer service provided by Amazon. The setup is very simple: the ELB accepts TCP traffic on ports 80 and 443. So we don’t use the SSL offloading capabilities of the ELB: NGINX will do the SSL offloading in this setup.
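If you create the ELB by hand with the AWS CLI, the listener setup could look like the sketch below. The load balancer name, subnets and security group are placeholders for your own environment:

```shell
# Classic ELB with plain TCP listeners; 31111/31112 are the NodePorts
# on which the Ingress Controller service will be exposed.
aws elb create-load-balancer \
  --load-balancer-name ingress-elb \
  --listeners \
    "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=31111" \
    "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=31112" \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-cccc3333
```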
Notice the 31111 and 31112 instance ports: we’ll come back to those in the Ingress Controller service definition.
The ELB is connected to an EC2 AutoScaling group which launches the Kubernetes nodes (minions). Depending on your setup you probably don’t have an Ingress Controller running on every Kubernetes node (you could if you wanted to…), so it is important for the ELB to know on which servers in the AutoScaling group an Ingress Controller pod is actually running. For this we configure a healthcheck:
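With the AWS CLI, the healthcheck could be configured as sketched below; the intervals and thresholds are just reasonable starting values, and port 31113 is the dedicated healthcheck NodePort from our setup:

```shell
# Point the ELB healthcheck at the healthz NodePort, so only nodes that
# actually run an Ingress Controller pod are marked healthy.
aws elb configure-health-check \
  --load-balancer-name ingress-elb \
  --health-check Target=HTTP:31113/healthz,Interval=10,Timeout=5,UnhealthyThreshold=3,HealthyThreshold=2
```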
If you know about NodePort services, you might wonder: isn’t a NodePort service available on every node, so won’t all nodes be healthy according to this healthcheck? No, we use the OnlyLocal annotation to prevent that, but I’ll get back to that later in this blog.
Also notice that we use a third port for the healthcheck: 31113. This port is dedicated to serving a healthcheck endpoint that determines whether the Ingress Controller pod is healthy.
An important detail about this TCP based ELB setup is enabling Proxy Protocol support. By default the ELB will not pass the original client IP address on to the NGINX Ingress Controller and ultimately your application. Making sure you know the correct client IP address can be very important for your application, your sticky session implementation and your access log data. If you want to make sure that your environment is properly configured, it can be very convenient to simply see the headers of the HTTP requests that end up in the application pod. Tools like the echoheaders application or the Docker image I developed for this task can help you out a lot.
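On a classic ELB, Proxy Protocol is enabled per backend instance port. A sketch with the AWS CLI, assuming the same placeholder ELB name as above (the policy name is arbitrary):

```shell
# Create a Proxy Protocol policy and attach it to both backend NodePorts.
aws elb create-load-balancer-policy \
  --load-balancer-name ingress-elb \
  --policy-name enable-proxy-protocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name ingress-elb --instance-port 31111 \
  --policy-names enable-proxy-protocol
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name ingress-elb --instance-port 31112 \
  --policy-names enable-proxy-protocol
```

Note that NGINX must expect Proxy Protocol as well, which is what the `use-proxy-protocol` setting in the custom configuration ConfigMap is for.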
Ingress Controller Kubernetes NodePort service
The Kubernetes NodePort service is a special kind of service that can be really helpful if you want to expose a service outside your Kubernetes cluster. A NodePort service will be reachable on the public IP address of every Kubernetes node, which is very convenient when you want to connect an ELB to a Kubernetes service, as the ELB will connect to any server in its EC2 AutoScaling group.
When you do nothing special, the NodePort will be reachable on every node in the Kubernetes cluster, and Kubernetes will make sure that a request to the service port ends up on a node on which an associated pod is running. So if your pod isn’t running on the node to which the ELB sends the request, an additional network hop to the node running the pod is introduced. While fully functional, this setup isn’t ideal from a performance / network load perspective. Luckily Kubernetes has a workaround for this situation: the OnlyLocal annotation. This ensures that the NodePort service ports are only reachable on nodes on which the pod is actually running. And only these nodes will be seen as healthy by the ELB, which is exactly what we are after!
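A sketch of such a NodePort service; the service and label names are placeholders, the NodePorts match the ELB instance ports from above, and 10254 is the default status/healthz port of the NGINX Ingress Controller:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    # Annotation form used on older Kubernetes versions; newer clusters
    # use spec.externalTrafficPolicy: Local instead.
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    nodePort: 31111
  - name: https
    port: 443
    nodePort: 31112
  - name: healthz
    port: 10254
    nodePort: 31113
```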
Ingress Controller Kubernetes NodePort deployment
Alongside the service we need a Deployment to launch the actual pod(s):
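A trimmed-down sketch of such a Deployment; the image tag, replica count and the names of the default backend service and TLS secret are placeholders you would adapt to your own cluster:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=$(POD_NAMESPACE)/wildcard-tls
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-configuration
        env:
        # The controller needs its own pod name and namespace at runtime.
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 10254
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
```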
This Kubernetes Deployment is a bit verbose, but still pretty basic. Most of the configuration is what you would find in any Kubernetes Deployment example. A couple of interesting points though:
- Default backend configuration: NGINX needs to be able to redirect any requests for which no Ingress rules exist. For that you can use this simple default backend deployment.
- Default SSL certificate: the reasons you might want to configure a default SSL certificate are documented here.
- TLS dhparam: I have to write another blog on the whole SSL/TLS setup. In the meantime, if you want to configure NGINX with custom DH parameters you can have a look at this example.
Custom Configuration ConfigMap
The last thing I want to mention about the Deployment configuration is the ConfigMap that is passed to the nginx-ingress-controller process. Examples on how to use the custom configuration are well hidden ;-) in the documentation. The ConfigMap allows you to set global configuration for your Ingress Controller. A simplified version of the custom configuration we currently use:
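A sketch of such a ConfigMap; `use-proxy-protocol` is the one setting this ELB setup actually requires, the other keys are merely illustrative and the values are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-configuration
data:
  # Required: the ELB forwards TCP with Proxy Protocol enabled.
  use-proxy-protocol: "true"
  # Hide the NGINX version in response headers.
  server-tokens: "false"
  # Allow larger request bodies (e.g. file uploads).
  proxy-body-size: "50m"
  ssl-protocols: "TLSv1.2"
```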
I’m planning on writing some more blogs on the topic of using the Kubernetes NGINX Ingress Controller. The following topics are currently on my short list:
- proxy protocol support
- custom config injections to configure a specific Ingress
- sticky session support
- SSL/TLS certificate management & configuration and optimising your setup for maximal scores on security checks
But please feel free to contact me if you have any suggestions for upcoming blogs or any other questions!