Purpose: using Nginx as an API Gateway in Kubernetes

The problem described here is quite simple and fairly common: your team is writing several microservices (say svcA, svcB and svcC) which need to be exposed from the backend side of the world so they can later be consumed by a frontend app.

The setup on a classic environment would be pretty straightforward:

  • deploy svcA -> http://svcA
  • deploy svcB -> http://svcB
  • deploy svcC -> http://svcC

Great! Now they’re accessible to anyone inside your network, so the show can go on. But what if you need to expose all these services together, following the /svcPrefix -> internal_service pattern, aka using a reverse proxy? Simple! Start an nginx, create a config file, and proxy_pass to your services (a simplified version, just to explain the basics):

server {
    listen 80;
    location ~* ^/svcA/(.*) {
        proxy_pass  http://svcA/$1$is_args$args;
    }
    location ~* ^/svcB/(.*) {
        proxy_pass  http://svcB/$1$is_args$args;
    }
    location ~* ^/svcC/(.*) {
        proxy_pass  http://svcC/$1$is_args$args;
    }
}
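
With this config, the regex capture ($1) strips the /svcX prefix before proxying. For instance (the host and path below are made up just for illustration):

# a request to the gateway...
curl 'http://gateway-host/svcA/users?id=42'
# ...reaches the upstream service without the prefix:
# GET http://svcA/users?id=42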

Excellent. But time flies by and your team wants to use Kubernetes for the production deployment. What do you do now if you want to keep the same routing logic? Pretty simple:

  1. build svcX as images
  2. push them to your images registry
  3. create a deployment in Kubernetes for each service
  4. expose them internally through services

Cool, so right now you can reach your still-unexposed services internally at http://svcX. Half of the work is done (a minimal sketch of steps 3 and 4 follows below).
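
For completeness, here’s what steps 3 and 4 could look like for one of the services. Note that Kubernetes object names must be lowercase, so svcA becomes svca here; the image name and container port are assumptions you’d adapt to your own registry and app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: svca
spec:
  replicas: 1
  selector:
    matchLabels:
      app: svca
  template:
    metadata:
      labels:
        app: svca
    spec:
      containers:
        - name: svca
          image: my-registry/svca:latest  # hypothetical image
          ports:
            - containerPort: 8080         # assumed app port
---
apiVersion: v1
kind: Service
metadata:
  name: svca  # the Service name becomes the internal hostname
spec:
  selector:
    app: svca
  ports:
    - port: 80          # reachable as http://svca
      targetPort: 8080  # forwards to the container port above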

But what to do with the flexible nginx config we were talking about, if you don’t want to dig into the Nginx Ingress Controller and how to map an API Gateway onto it?

We can follow the same logic as in the first, more common example by using an nginx container, and achieve the same functionality through ConfigMaps that hold the entire configuration. For good measure, we can even split out the shared configuration files which I omitted in my initial example.

So:

  • proxy.conf
proxy_set_header  Host              $http_host;
proxy_set_header  X-Real-IP         $remote_addr;
proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header  X-Forwarded-Proto $scheme;
proxy_read_timeout                  900;
  • cors.conf (dealing with preflight)
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS, PATCH, HEAD' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Origin,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,User-Agent,X-AnyCustomHeader' always;

# nginx only inherits add_header from the outer level if a block defines
# none of its own, so the headers are repeated for the preflight response
if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS, PATCH' always;
    add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Origin,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,User-Agent,X-AnyCustomHeader' always;
    add_header 'Access-Control-Max-Age' 0;
    add_header 'Content-Type' 'text/plain; charset=utf-8';
    add_header 'Content-Length' 0;
    return 204;
}
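
Once the gateway is running, you can sanity-check the preflight handling with something like the request below (host, path and origin are placeholders); it should come back as a 204 carrying the Access-Control-* headers:

curl -i -X OPTIONS 'http://gateway-host/svcA/users' \
  -H 'Origin: http://localhost:3000' \
  -H 'Access-Control-Request-Method: POST'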
  • nginx.conf
user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen 80;

        # use the cluster's internal DNS so Service names resolve at request time
        resolver kube-dns.kube-system.svc.cluster.local valid=5s;

        # lightweight health-check endpoint
        location /status {
            return 200;
        }

        # catch-all for anything that doesn't match a service prefix
        location / {
            return 200;
        }

        location ~* ^/svcA/(.*) {
            include cors.conf;
            include proxy.conf;
            proxy_pass  http://svcA.default.svc.cluster.local/$1$is_args$args;
        }

        location ~* ^/svcB/(.*) {
            include cors.conf;
            include proxy.conf;
            proxy_pass  http://svcB.default.svc.cluster.local/$1$is_args$args;
        }

        location ~* ^/svcC/(.*) {
            include cors.conf;
            include proxy.conf;
            proxy_pass  http://svcC.default.svc.cluster.local/$1$is_args$args;
        }
    }
}

The interesting parts here are:

  • the proxy_pass destination hosts - they’re pointing to a Service (the svc part of the FQDN) deployed under a namespace (in my case, the default one)
  • the resolver - we’re using the cluster’s internal DNS -> kube-dns.kube-system.svc.cluster.local; because proxy_pass contains variables ($1 and friends), nginx resolves the Service hostnames at request time through this resolver, and if you don’t set it up correctly you’ll end up with a bunch of 502 errors
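
You can double-check that the DNS Service actually exists under that name before wiring it in (on most clusters it’s still called kube-dns even when CoreDNS is the actual implementation):

kubectl -n kube-system get svc kube-dns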

Now we’re ready to deploy all these config files inside Kubernetes under a ConfigMap:

  • nginx-api-gateway-configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-api-gw-config
data:
  proxy.conf: |
    <<content of proxy.conf>>    
  cors.conf: |
    <<content of cors.conf>>    
  nginx.conf: |
    <<content of nginx.conf>>    
  • kubectl apply -f nginx-api-gateway-configMap.yaml
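
Alternatively, instead of pasting the file contents into the manifest by hand, you can let kubectl generate it from the files themselves:

kubectl create configmap nginx-api-gw-config \
    --from-file=nginx.conf --from-file=proxy.conf --from-file=cors.conf \
    --dry-run=client -o yaml > nginx-api-gateway-configMap.yaml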

We’re now ready to create the Deployment for the nginx container. As mentioned above, we’re going to customize the deployment by mounting the ConfigMap’s contents as a volume; then we can reference the files inside the container and expose the services. Below is the Kubernetes YAML which implements this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-nginx-api-gw
  labels:
    app: nginx-api-gw
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-api-gw
  template:
    metadata:
      labels:
        app: nginx-api-gw
    spec:
      hostNetwork: false
      dnsPolicy: ClusterFirst
      containers:
        - name: nginx-api-gw
          image: nginx:alpine
          resources:
            limits:
              memory: "128Mi"
              cpu: "100m"
          volumeMounts:
            # subPath mounts each ConfigMap key as a single file,
            # so the rest of /etc/nginx from the image stays intact
            - name: api-gw-files
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: api-gw-files
              mountPath: /etc/nginx/proxy.conf
              subPath: proxy.conf
            - name: api-gw-files
              mountPath: /etc/nginx/cors.conf
              subPath: cors.conf
      volumes:
        - name: api-gw-files
          configMap:
            name: nginx-api-gw-config
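
Assuming you saved the manifest as nginx-api-gateway-deployment.yaml (the filename is my own choice), you can apply it and then verify that nginx accepts the mounted configuration:

kubectl apply -f nginx-api-gateway-deployment.yaml
kubectl exec deploy/pod-nginx-api-gw -- nginx -t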

After applying this configuration, you can expose your nginx Deployment through a Service and an Ingress, and thereby reach your internal services from outside the cluster.
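
As a sketch, that could look like the following (the hostname is a placeholder, and depending on your cluster you may also need to set an ingressClassName):

apiVersion: v1
kind: Service
metadata:
  name: nginx-api-gw
spec:
  selector:
    app: nginx-api-gw
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-api-gw
spec:
  rules:
    - host: api.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-api-gw
                port:
                  number: 80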

What more can you do? Quite a lot, actually: you can implement SSL termination, for example, in a similar manner - mounting your certificates from Secrets and configuring the main conf accordingly.
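
A rough sketch of that approach, assuming a kubernetes.io/tls Secret named api-gw-tls (the name is hypothetical; such Secrets always carry the tls.crt and tls.key keys). On the nginx side:

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    # ...same resolver and location blocks as above...
}

And on the Deployment side, mount the Secret next to the ConfigMap files:

          volumeMounts:
            - name: tls-certs
              mountPath: /etc/nginx/certs
              readOnly: true
      volumes:
        - name: tls-certs
          secret:
            secretName: api-gw-tls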

This is arguably the simplest way to create a flexible and dynamic API gateway in order to expose entire stacks of services.