In some scenarios when integrating with an Identity Provider (IdP) - Keycloak, Auth0, or any similar solution - Nginx may return a 400 error because the request headers (typically the cookies set by the IdP) exceed its default buffer sizes. To solve this issue, some additional configuration for your Nginx ingress is required.

Nginx Ingress is one of the resources used in Kubernetes to receive external requests and route them to the appropriate Service in the cluster. Under the hood it is simply an nginx Deployment: a number of pods running nginx with a generated config file that can inherit values. It’s available in two flavors: the free community edition and the commercial (enterprise) one.

In my context, I had deployed the Nginx Ingress through Helm, using the official chart; the Kubernetes cluster itself was hosted on AWS. After integrating the company’s IdP, we started to receive a “400 Bad Request - Cookie too large” when browsing the web app, so we started to dig into what was going on.
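
For reference, the controller was installed roughly like this (a minimal sketch - the release name and namespace are assumptions, not the exact commands we ran):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# the release name "nginx" is what produces resources named nginx-ingress-nginx-controller,
# matching the deployment referenced later in this post
helm install nginx ingress-nginx/ingress-nginx -n default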

The first attempt to fix this was to patch the ingress annotations of the web app (which is also packaged as a Helm chart):

annotations:
  [... old values ...]
  nginx.ingress.kubernetes.io/client-header-buffer-size: "64k"
  nginx.ingress.kubernetes.io/large-client-header-buffers: "4 64k"
  nginx.ingress.kubernetes.io/http2-max-header-size: "64k"
  nginx.ingress.kubernetes.io/proxy-body-size: "40m"
  nginx.ingress.kubernetes.io/proxy-buffering: "on"
  nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
  nginx.ingress.kubernetes.io/proxy-buffers: "8 256k"
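
If you’d rather not redeploy the chart just to try a value, the same annotations can also be applied in place on the existing Ingress object (a quick sketch - the Ingress name my-web-app is a placeholder):

kubectl annotate ingress my-web-app \
  nginx.ingress.kubernetes.io/proxy-buffer-size=128k \
  nginx.ingress.kubernetes.io/large-client-header-buffers="4 64k" \
  --overwrite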

Well, it didn’t help much - we were still getting those errors.

Therefore, we needed to dig deeper in order to profile this issue. First thing - check the logs. It’s something you should always start with when facing similar issues. Luckily, it’s fairly easy to see what’s going on when this stack is deployed on Kubernetes - locate the Pod running the nginx ingress and tail its logs. Kubectl is needed, but I bet you already have it available on your system.
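
To find the exact pod name, listing the pods and grepping for the controller works (a sketch, assuming the controller runs in the default namespace as in my setup):

kubectl get pods -n default | grep ingress-nginx-controller

Then tail the logs of the pod you found: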

kubectl logs -f nginx-ingress-nginx-controller-54bdf7frqq

You’ll be able to watch incoming traffic live, and you can notice several things about each request:

  • request_length
  • bytes_sent
  • status
  • vhost
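
If you want to see exactly which fields show up in each log line and in what order, you can grep the log_format directive out of the controller’s generated config (a sketch - the pod name is the same placeholder as above):

kubectl exec nginx-ingress-nginx-controller-54bdf7frqq -- \
  grep -A 3 'log_format' /etc/nginx/nginx.conf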

You’ll notice that, theoretically, the request should go through - but it doesn’t. Therefore, we need to understand how the Nginx ingress works under the hood. The whole idea is that custom configuration values can be passed through a ConfigMap associated with the Nginx deployment. The deployment fetches those keys and pushes them into the server config block as parameters. For a request to go through and reach your app’s ingress, it first has to get past the server block - so that is where the request was being blocked and why we were receiving the error message.
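
Conceptually, that ConfigMap is just a flat key/value map whose entries get templated into the generated nginx config. A minimal sketch of its shape (the name and namespace here match my release; yours may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-nginx-controller
  namespace: default
data:
  client-header-buffer-size: "64k"
  large-client-header-buffers: "4 64k"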

Locating the nginx deployment and configMap:

~ >  kubectl get deployment -A | grep nginx
default        nginx-ingress-nginx-controller   2/2     0            2           28d

~ >  kubectl describe deployment nginx-ingress-nginx-controller | grep configmap
      --configmap=$(POD_NAMESPACE)/nginx-ingress-nginx-controller
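
Before changing anything, it’s worth dumping the ConfigMap to see which keys (if any) are already set:

kubectl get configmap nginx-ingress-nginx-controller -n default -o yaml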

Now, we need to make a list of the params we want to pass:

client-header-buffer-size: 64k
large-client-header-buffers: 4 64k
http2-max-field-size: 16k
http2-max-header-size: 128k
proxy-buffer-size: 128k
proxy-buffers: 4 256k
proxy-busy-buffers-size: 256k
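
For context, each of these keys maps one-to-one to an nginx directive in the config the controller renders - roughly the following (a sketch of the generated output, not something you write by hand):

# excerpt of what ends up in the generated /etc/nginx/nginx.conf
client_header_buffer_size   64k;
large_client_header_buffers 4 64k;
http2_max_field_size        16k;
http2_max_header_size       128k;
proxy_buffer_size           128k;
proxy_buffers               4 256k;
proxy_busy_buffers_size     256k;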

Then we should patch the configMap:

~ >  kubectl patch configmap/nginx-ingress-nginx-controller \
-n default \
--type merge \
-p '{
  "data": {
    "client-header-buffer-size": "64k",
    "http2-max-field-size": "16k",
    "http2-max-header-size": "128k",
    "large-client-header-buffers": "4 64k",
    "proxy-buffer-size": "128k",
    "proxy-buffers": "4 256k",
    "proxy-busy-buffers-size": "256k"
  }
}'

Once you do this, everything should be good. The server block in the controller’s /etc/nginx/nginx.conf (mentioned just as a reference to understand the concept) is now patched. Don’t forget about each vhost’s settings, though, as they can override these values and decrease them if needed.
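
You can verify that the new values actually landed in the generated config by grepping for them inside the controller pod (a sketch - swap in your own pod name):

kubectl exec nginx-ingress-nginx-controller-54bdf7frqq -- \
  grep -E 'client_header_buffer_size|large_client_header_buffers|proxy_buffer_size' \
  /etc/nginx/nginx.conf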

Please note that if you choose to edit the configMap manually (kubectl edit, through Lens, etc.), you also need to restart the nginx deployment manually so the new values are picked up.
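
A rolling restart is enough for that:

kubectl rollout restart deployment nginx-ingress-nginx-controller -n default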