AWS EC2 Kubernetes Elasticsearch file descriptors issue


I was trying to create a Kubernetes cluster and install Elasticsearch. The cluster was created with rke on AWS EC2 instances running the ECS-optimized Amazon Linux image. After installing Elasticsearch with Helm, I saw this error:

ERROR: [1] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch.log

This is a common error when installing Elasticsearch: its bootstrap checks require the maximum number of file descriptors to be at least 65535.

There are several ways to fix this issue.

  • Create a custom container image with a modified /etc/security/limits.conf file.
  • Create a custom container image with an entrypoint script that calls ulimit -n 65536 before starting Elasticsearch.

Some articles suggest that this issue can be fixed by calling ulimit -n 65536 in a Kubernetes init container. This does not work: an init container is a separate process in a separate container, so the ulimit call only affects the init container itself, not the Elasticsearch container that starts afterwards.
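You can demonstrate locally that ulimit is per-process: lowering the limit in a child shell leaves the parent (and any sibling process) untouched, which is exactly why an init container cannot raise the limit for the main container.

```shell
# Lower the soft nofile limit in a child shell and print it there.
bash -c 'ulimit -n 1024 && ulimit -n'   # child prints 1024

# The parent shell still has its original limit.
ulimit -n
```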

For the ECS-optimized Amazon Linux image, the correct way is to modify the file /etc/sysconfig/docker, which specifies the options passed to the Docker daemon. The default ulimit is nofile=1024:4096, which caps the number of file descriptors available to a container at 4096.
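On the ECS-optimized image, the relevant line in /etc/sysconfig/docker looks similar to the following (the exact contents may vary by image version):

OPTIONS="--default-ulimit nofile=1024:4096"

The --default-ulimit flag sets the soft:hard nofile limits that the Docker daemon applies to every container it starts, unless a container overrides them.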

All we need to do is set a higher nofile value in this file. This can be done with a shell script passed as the user data of the EC2 instances.

Below is the content of the shell script:

sed -i 's/^OPTIONS=.*/OPTIONS=\"--default-ulimit nofile=65535:65535\"/' /etc/sysconfig/docker && systemctl restart docker
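You can sanity-check the sed substitution locally against a copy of the file before baking it into user data (a sketch; the sample OPTIONS line below mirrors the default on the ECS-optimized image, and /tmp/docker-sysconfig is just a scratch path for the test):

```shell
# Create a sample file mimicking /etc/sysconfig/docker
cat > /tmp/docker-sysconfig <<'EOF'
DAEMON_MAXFILES=1048576
OPTIONS="--default-ulimit nofile=1024:4096"
EOF

# Apply the same substitution as the user-data script
sed -i 's/^OPTIONS=.*/OPTIONS="--default-ulimit nofile=65535:65535"/' /tmp/docker-sysconfig

# Confirm the OPTIONS line was rewritten
grep '^OPTIONS=' /tmp/docker-sysconfig
```

On the actual instance, after the script has run and Docker has restarted, the new default can be verified from inside any freshly started container with ulimit -n.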

Since the EC2 instances are provisioned using Terraform, we can modify the aws_launch_template resource to include the user data:

resource "aws_launch_template" "instance" {
  # ... other arguments ...
  user_data = filebase64("${path.module}/")
}

After recreating the EC2 instances and reinstalling the Kubernetes cluster, Elasticsearch starts successfully.

© 2021 VividCode