
Introduction

Since 2015 we have been using Kubernetes within the Luminis organisation as the foundation of our Cloud Platform. A lot has changed since then, both in Kubernetes and in the Cloud Platform we build on top of Kubernetes. We continue to improve our platform, not only by adding new features but also by replacing existing features with better alternatives.

One of the custom features in our platform we wanted to replace with off-the-shelf functionality was our load balancing solution. Until now we used a setup combining Amazon Elastic Load Balancers, HAProxy, Confd & Etcd. Although this setup continued to run without a hitch, we would rather have a tested solution supported by the vast Kubernetes community than continue to invest in our own custom solution. The obvious choice was to start using Kubernetes Ingress Resources. While good documentation exists on the Kubernetes website, it wasn’t that obvious what the best approach would be to deploy such a setup on AWS. For that, this blog deserves a shoutout, as it helped me get started big time! It offers a good starting point for setting up an Ingress Controller, including SSL offloading.

Requirements

To replace our existing load balancing solution we had a couple of requirements that were classified as MUST HAVES:

  • Flexible SSL certificate management (old setup needed a separate ELB for each certificate that we needed to support)
  • Good load balancing strategies (Kubernetes services are way too limited for that) & sticky session support
  • Configuration reload without downtime (important for our Blue-Green deployments)
  • Possibility to add custom HTTP headers to individual backends
  • Good performance & great reliability

While other alternatives exist, we decided to go with the NGINX based implementation that is part of the overall Kubernetes project. One of the reasons to select this implementation is that it connects directly to the Kubernetes pods instead of going through the Kubernetes service. This enables more advanced sticky session support compared to the IP based sticky session implementation of the current Kubernetes services.
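Before diving into the deployment itself, here is a minimal sketch of the kind of Ingress resource the controller will end up serving. The hostname, backend service name and port are hypothetical placeholders, but the structure is what the NGINX Ingress Controller picks up.

{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "name": "example-ingress",
        "namespace": "cloud-infra"
    },
    "spec": {
        "rules": [
            {
                "host": "app.example.com",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "backend": {
                                "serviceName": "example-service",
                                "servicePort": 80
                            }
                        }
                    ]
                }
            }
        ]
    }
}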

Deployment overview

When you want to deploy an Ingress Controller in an AWS environment, you basically have two ways to get started:

  1. You create a Kubernetes service of the type LoadBalancer, while making sure that your Kubernetes environment is allowed to create ELBs in AWS. That way everything should be set up automatically for you (a minimal sketch of such a service follows after this list).
  2. You do it “by hand”, which is what I did. I had a couple of reasons to use this approach; one of them was that the ELB I wanted to use in this setup already existed, so I didn’t want Kubernetes to create an ELB for me. Instead of using a service of type LoadBalancer we use a service of type NodePort to expose the Ingress Controller.
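For completeness, this is roughly what option 1 could look like: a service of type LoadBalancer selecting the Ingress Controller pods, for which Kubernetes then provisions the ELB itself. The name and namespace are placeholders; the remainder of this blog follows option 2.

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "nginx-ingress-elb",
        "namespace": "cloud-infra"
    },
    "spec": {
        "type": "LoadBalancer",
        "ports": [
            {"port": 80, "name": "http"},
            {"port": 443, "name": "https"}
        ],
        "selector": {
            "app": "nginx-ingress-lb"
        }
    }
}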

Deployment view of Ingress Controller on AWS

Ingress Controller deployment

Amazon Elastic Load Balancer

We use the classic ELB, not the newer Application Load Balancer service provided by Amazon. The setup is very simple: the ELB accepts TCP traffic on ports 80 and 443. So we don’t use the SSL offloading capabilities of the ELB: NGINX will do the SSL offloading in this setup.

ELB Port Configuration screenshot

Notice the 31111 and 31112 instance ports: we’ll come back to those in the Ingress Controller service definition.
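If you prefer to script the ELB instead of clicking through the console, the listener configuration can be captured in a JSON file and passed to aws elb create-load-balancer with --cli-input-json. The load balancer name, subnet and security group below are placeholders for your own environment.

{
    "LoadBalancerName": "<YOUR_ELB_NAME>",
    "Listeners": [
        {"Protocol": "TCP", "LoadBalancerPort": 80, "InstanceProtocol": "TCP", "InstancePort": 31111},
        {"Protocol": "TCP", "LoadBalancerPort": 443, "InstanceProtocol": "TCP", "InstancePort": 31112}
    ],
    "Subnets": ["<YOUR_SUBNET_ID>"],
    "SecurityGroups": ["<YOUR_SECURITY_GROUP_ID>"]
}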

The ELB is connected to an EC2 Auto Scaling group which launches the Kubernetes nodes (minions). Depending on your setup you probably don’t have an Ingress Controller running on every Kubernetes node (you could if you wanted to…), so it is important for the ELB to know on which servers in the Auto Scaling group the Ingress Controller pod is actually running. For this we configure a healthcheck:

ELB Health Check configuration screenshot

If you know about NodePort services, you might wonder: a NodePort service is available on every node, so won’t all nodes be healthy according to this healthcheck? No: we use the OnlyLocal annotation to prevent that. I’ll get back to that later on in this blog.

Also notice that we use a third port for the healthcheck: 31113. This port is dedicated to serving a healthcheck endpoint that determines whether the Ingress Controller pod is healthy.
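The healthcheck can be scripted in the same way with aws elb configure-health-check and a --cli-input-json file along these lines; the interval, timeout and threshold values are just reasonable defaults, not necessarily the exact values from our setup.

{
    "LoadBalancerName": "<YOUR_ELB_NAME>",
    "HealthCheck": {
        "Target": "HTTP:31113/healthz",
        "Interval": 10,
        "Timeout": 5,
        "UnhealthyThreshold": 3,
        "HealthyThreshold": 2
    }
}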

Proxy Protocol

An important detail about this TCP based ELB setup is enabling Proxy Protocol support. By default the ELB will not pass the original client IP address on to the NGINX Ingress Controller, and ultimately your application. Knowing the correct client IP address can be very important for your application, your sticky session implementation and your access log data. If you want to verify that your environment is properly configured, it can be very convenient to simply inspect the headers of the HTTP requests that end up in the application pod. Tools like the echoheaders application or the Docker image I developed for this task can help you out a lot.
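On a classic ELB, Proxy Protocol is enabled by creating a policy of type ProxyProtocolPolicyType and attaching it to the backend instance ports (31111 and 31112 in our setup). A sketch of the --cli-input-json input for aws elb create-load-balancer-policy, with the load balancer and policy names as placeholders:

{
    "LoadBalancerName": "<YOUR_ELB_NAME>",
    "PolicyName": "proxy-protocol-policy",
    "PolicyTypeName": "ProxyProtocolPolicyType",
    "PolicyAttributes": [
        {"AttributeName": "ProxyProtocol", "AttributeValue": "true"}
    ]
}

After creating the policy, attach it to instance ports 31111 and 31112 with aws elb set-load-balancer-policies-for-backend-server. The NGINX side of the handshake is covered by the use-proxy-protocol setting in the ConfigMap further down in this blog.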

Ingress Controller Kubernetes NodePort service

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx-ingress",
    "namespace": "cloud-infra",
    "annotations": {
        "service.beta.kubernetes.io/external-traffic": "OnlyLocal"
    }
  },
  "spec": {
    "ports": [
        {"port": 80, "name": "http", "nodePort": 31111},
        {"port": 443, "name": "https", "nodePort": 31112},
        {"port": 10254, "name": "healthcheck", "nodePort": 31113},
        {"port": 18080, "name": "status", "nodePort": 31114}],
    "selector": {
      "app": "nginx-ingress-lb"
    },
    "type": "NodePort"
  }
}

The Kubernetes NodePort service is a special kind of service that can be really helpful if you want to expose a service outside your Kubernetes cluster. A NodePort service will be reachable on the IP address of every Kubernetes node, which is very convenient if you want to connect an ELB to a port of a Kubernetes service, as the ELB will connect to any server in its EC2 Auto Scaling group.

When you do nothing special, the NodePort will be reachable on any node in the Kubernetes cluster, and Kubernetes will make sure that a request to the service port ends up on a node on which an associated pod is running. So if your pod isn’t running on the node to which the ELB sends the request, an additional network hop to the node running the pod is introduced. While fully functional, this setup isn’t ideal from a performance / network load perspective. Luckily Kubernetes has a workaround for this situation: the OnlyLocal annotation. It ensures that the NodePort service ports are only reachable on nodes on which the pod is actually running. And only those nodes will be seen as healthy by the ELB, which is exactly what we are after!

Ingress Controller Kubernetes NodePort deployment

Alongside the service we need a Deployment to launch the actual pod(s):

{
    "kind": "Deployment",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
        "name": "nginx-ingress-controller",
        "namespace": "cloud-infra",
        "labels": {
            "app": "nginx-ingress-lb"
        }
    },
    "spec": {
        "replicas": 1,
        "selector": {
            "matchLabels": {
                "app": "nginx-ingress-lb"
            }
        },
        "template": {
            "metadata": {
                "labels": {
                    "app": "nginx-ingress-lb"
                }
            },
            "spec": {
                "containers": [
                    {
                        "name": "nginx-ingress-controller",
                        "image": "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11",
                        "ports": [
                            {
                                "containerPort": 80
                            },
                            {
                                "containerPort": 443
                            },
                            {
                                "containerPort": 10254
                            },
                            {
                                "containerPort": 18080
                            }
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/healthz",
                                "port": 10254
                            },
                            "initialDelaySeconds": 10,
                            "timeoutSeconds": 5
                        },
                        "readinessProbe": {
                            "httpGet": {
                                "path": "/healthz",
                                "port": 10254
                            }
                        },
                        "args": [
                            "/nginx-ingress-controller",
                            "--apiserver-host=http://<YOUR_KUBERNETES_MASTER_ENDPOINT>:8080",
                            "--configmap=$(POD_NAMESPACE)/nginx-ingress-custom-configuration",
                            "--default-backend-service=$(POD_NAMESPACE)/default-http-backend",
                            "--default-ssl-certificate=$(POD_NAMESPACE)/my-tls-certificate"
                        ],
                        "env": [
                            {
                                "name": "POD_NAME",
                                "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}
                            },
                            {
                                "name": "POD_NAMESPACE",
                                "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}
                            }
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "500m",
                                "memory": "1Gi"
                            },
                            "limits": {
                                "cpu": "1",
                                "memory": "1Gi"
                            }
                        },
                        "volumeMounts": [
                            {
                                "name": "tls-dhparam-vol",
                                "mountPath": "/etc/nginx-ssl/dhparam"
                            }
                        ],
                        "imagePullPolicy": "Always"
                    }
                ],
                "volumes": [
                    {
                        "name": "tls-dhparam-vol",
                        "secret": {
                            "secretName": "tls-dhparam"
                        }
                    }
                ],
                "restartPolicy": "Always"
            }
        },
        "strategy": {
            "type": "Recreate"
        }
    }
}

This Kubernetes Deployment is a bit verbose, but still pretty basic. Most of the configuration is what you would find in any Kubernetes Deployment example. A couple of interesting points though:

  • Default backend configuration: NGINX needs to be able to redirect any request for which no Ingress rules exist. For that you can use this simple default backend deployment (a minimal sketch follows after this list).
  • Default SSL certificate: the reasons you might want to configure a default SSL certificate are documented here.
  • TLS dhparam: I still have to write another blog on the whole SSL/TLS setup. In the meantime, if you want to configure NGINX with custom DH parameters you can have a look at this example.
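For reference, a minimal sketch of such a default backend, based on the defaultbackend image from the Kubernetes project; the namespace, image tag and probe values are assumptions you should adapt to your own environment.

{
    "kind": "Deployment",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
        "name": "default-http-backend",
        "namespace": "cloud-infra"
    },
    "spec": {
        "replicas": 1,
        "template": {
            "metadata": {
                "labels": {
                    "app": "default-http-backend"
                }
            },
            "spec": {
                "containers": [
                    {
                        "name": "default-http-backend",
                        "image": "gcr.io/google_containers/defaultbackend:1.3",
                        "ports": [
                            {"containerPort": 8080}
                        ],
                        "livenessProbe": {
                            "httpGet": {
                                "path": "/healthz",
                                "port": 8080
                            },
                            "initialDelaySeconds": 30,
                            "timeoutSeconds": 5
                        }
                    }
                ]
            }
        }
    }
}

It is paired with a service named default-http-backend, matching the --default-backend-service argument of the Ingress Controller above:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "default-http-backend",
        "namespace": "cloud-infra"
    },
    "spec": {
        "ports": [
            {"port": 80, "targetPort": 8080}
        ],
        "selector": {
            "app": "default-http-backend"
        }
    }
}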

Custom Configuration ConfigMap

The last thing I want to mention about the Deployment configuration is the ConfigMap that is passed to the nginx-ingress-controller process. Examples of how to use the custom configuration are well hidden ;-) in the documentation. The ConfigMap allows you to set global configuration for your Ingress Controller. Here is a simplified version of the custom configuration we currently use:

{
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {
        "name": "nginx-ingress-custom-configuration",
        "namespace": "cloud-infra"
    },
    "data": {
        "enable-vts-status": "true",
        "ssl-dh-param": "cloud-infra/tls-dhparam",
        "ssl-redirect": "true",
        "use-proxy-protocol": "true"
    }
}

What’s next?

I’m planning on writing some more blogs on the topic of using the Kubernetes NGINX Ingress Controller. The following topics are currently on my short list:

  • proxy protocol support
  • custom config injections to configure a specific Ingress
  • sticky session support
  • SSL/TLS certificate management & configuration and optimising your setup for maximal scores on security checks

But please feel free to contact me if you have any suggestions for upcoming blogs or any other questions!