Setting up ElasticSearch with Localstack in docker-compose

Eugene Nikolaev
4 min read · Jul 10, 2023

Elasticsearch is a powerful search and analytics engine commonly used in modern applications. Localstack, on the other hand, is a tool that provides local versions of various AWS services, including Elasticsearch. By combining these two tools, you can set up a local development environment for Elasticsearch without needing an AWS account. In this guide, we will walk through the process of setting up Elasticsearch with Localstack inside a docker-compose environment.

Prerequisites: Before we begin, make sure you have the following prerequisites installed on your machine:

  • Docker: Localstack runs inside a Docker container.
  • Docker-compose

In this article I will set up Elasticsearch and Localstack and connect to them from my app, all running in the same docker-compose file.

First, let's create a docker-compose file with three services: the app from which we are going to call Elastic, Localstack, and Elastic itself:

version: '3'
services:
  my-service-app:
    container_name: my-service-app
    build:
      context: ../app
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    environment:
      AWS_OPEN_SEARCH_SERVICE_HOST: "http://localstack:4566/es/us-east-1/my-service-domain"
      AWS_OPEN_SEARCH_SERVICE_INDEX: "my-index"
    command: bash -c "sleep 35 && ./my-service" # wait for warm-up and migrations
    networks:
      - my-service

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "elastic_data:/usr/share/elasticsearch/data"
    networks:
      - my-service

  localstack:
    image: localstack/localstack:latest
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    ports:
      - "4566:4566"
    depends_on:
      - elasticsearch
    environment:
      - SERVICES=es
      - ES_CUSTOM_BACKEND=http://elasticsearch:9200
      - ES_ENDPOINT_STRATEGY=path
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DEBUG=1
      - LS_LOG=trace
      - DEFAULT_REGION=us-east-1
      - DATA_DIR=/var/lib/localstack
      - PERSISTENCE=1
    volumes:
      - "localstack_data:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - my-service

volumes:
  elastic_data:
    driver: local
  localstack_data:
    driver: local

networks:
  my-service:
    driver: bridge

These env vars in service contain Localstack domain and index which we will create later:

AWS_OPEN_SEARCH_SERVICE_HOST: "http://localstack:4566/es/us-east-1/my-service-domain"
AWS_OPEN_SEARCH_SERVICE_INDEX: "my-index"
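From these two variables the app can derive the URL it will actually query. Here is a minimal sketch (variable names come from the compose file above; the `_search` path is just the standard Elasticsearch query endpoint):

```shell
# Compose the search URL the app would use from the two env vars above.
# Defaults mirror the compose file; adjust if your setup differs.
AWS_OPEN_SEARCH_SERVICE_HOST="${AWS_OPEN_SEARCH_SERVICE_HOST-http://localstack:4566/es/us-east-1/my-service-domain}"
AWS_OPEN_SEARCH_SERVICE_INDEX="${AWS_OPEN_SEARCH_SERVICE_INDEX-my-index}"

SEARCH_URL="${AWS_OPEN_SEARCH_SERVICE_HOST}/${AWS_OPEN_SEARCH_SERVICE_INDEX}/_search"
echo "${SEARCH_URL}"

# Once the whole stack is up, a match-all query would look like:
#   curl -s "${SEARCH_URL}?q=*:*"
```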

In Localstack setup, this line is important:

- ES_ENDPOINT_STRATEGY=path

By default, Localstack only forwards incoming requests to Elastic when the request targets an existing Localstack domain, which looks like this:

my-app.us-east-1.es.localhost.localstack.cloud:4566

But within a docker network it's not easy (if even possible) to address the Localstack instance by that domain, because the normal way to address services in docker-compose is by service name and port.

And if the Localstack domain doesn't exist, Localstack will for some reason route all requests to S3 instead, even if S3 is never set up and nothing in the code mentions it. Be careful with this, because it might even return 200 responses that look okay, though S3 is definitely not what we are trying to call here.
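A quick way to sanity-check which backend actually answered: LocalStack's S3 fallback replies with XML, while Elasticsearch replies with JSON, so inspecting the response body is enough to tell them apart. This is my own hypothetical helper, not something from the LocalStack docs:

```shell
# Classify a response body: S3 answers with XML, Elasticsearch with JSON.
classify_backend() {
  local body="$1"
  if printf '%s' "$body" | grep -q '<ListAllMyBucketsResult'; then
    echo "s3"
  elif printf '%s' "$body" | grep -q '"cluster_name"'; then
    echo "elasticsearch"
  else
    echo "unknown"
  fi
}

# Once the stack is up, you could feed it a real response:
#   classify_backend "$(curl -s http://localhost:4566/es/us-east-1/my-service-domain/)"
classify_backend '{"cluster_name":"es-docker-cluster"}'   # → elasticsearch
```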

This setting changes the endpoint format Localstack expects to this:

localstack:4566/es/us-east-1/my-service-domain/my-index

With that, we can access our Elastic through Localstack normally from inside the docker network.

Reference: https://docs.localstack.cloud/user-guide/aws/elasticsearch/#endpoints

After that we need to add one more thing. Localstack, unless you are on its paid plan, doesn't preserve any data after a restart, and manually creating the domain and ES index each time, while possible, is bothersome.

For creating domain and index we will add this bash script:

#!/bin/bash
# Syntax help: ${NAME-default-value-if-NAME-not-set}
export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION-us-east-1}
export AWS_ENDPOINT=${AWS_ENDPOINT-http://localhost:4566}
export ES_DOMAIN_NAME=${ES_DOMAIN_NAME-my-service-domain}
export AWS_ES_ENDPOINT=${AWS_ES_ENDPOINT-my-service-domain.us-east-1.es.localhost.localstack.cloud:4566} # ${ES_DOMAIN_NAME}.${AWS_DEFAULT_REGION}.es.localhost.localstack.cloud:4566
export AWS_ES_INDEX_NAME=${AWS_ES_INDEX_NAME-my-index}

if [[ -n "${AWS_PROFILE}" ]]; then
  aws configure set aws_access_key_id some_access_key_id --profile ${AWS_PROFILE}
  aws configure set aws_secret_access_key some_secret_access_key --profile ${AWS_PROFILE}
fi

aws --endpoint-url=${AWS_ENDPOINT} es create-elasticsearch-domain \
  --domain-name ${ES_DOMAIN_NAME} \
  --elasticsearch-version 7.10 \
  --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}'

curl -X PUT localstack:4566/es/us-east-1/my-service-domain/my-index

In this script we first set up the env variables, then create the domain and the index.

Here is the Dockerfile for the script; it's simple and just needs aws-cli and curl installed to run the bash script from above:

FROM alpine

RUN apk update && \
    apk add --no-cache bash curl && \
    apk add --no-cache aws-cli

COPY aws_es_init.sh .
COPY .aws /root/.aws

RUN chmod +x aws_es_init.sh

CMD ["/bin/bash", "-c", "./aws_es_init.sh"]
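The `.aws` directory copied above only needs dummy credentials, since Localstack doesn't validate them. A sketch of what it could contain (the profile name matches the `AWS_PROFILE` we set in docker-compose; the values are placeholders):

```
# .aws/credentials
[localstack]
aws_access_key_id = test
aws_secret_access_key = test

# .aws/config
[profile localstack]
region = us-east-1
output = json
```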

Then let's add the script as a service in docker-compose:

migration_aws_oss:
  container_name: migration_aws_oss
  build:
    context: ../docker/aws_es
    dockerfile: Dockerfile
  depends_on:
    - elasticsearch
  environment:
    AWS_PROFILE: "localstack"
    AWS_ENDPOINT: "http://host.docker.internal:4566"
    ES_DOMAIN_NAME: "my-service-domain"
    AWS_ES_ENDPOINT: "host.docker.internal:4566"
    AWS_ES_INDEX_NAME: "my-index"
  command: bash -c "sleep 30 && ./aws_es_init.sh" # wait for the Elastic cluster to warm up
  networks:
    - my-service

With this, on docker-compose up, our script will run (make sure it exists at /docker/aws_es/aws_es_init.sh) and create the domain and index.
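The fixed `sleep 30` / `sleep 35` waits work, but a polling loop is more robust against slow starts. Here is a sketch of such a helper (my own, not from the article's repo), which retries a URL until it answers or the attempts run out:

```shell
# Retry a URL with curl until it responds or the attempt budget runs out.
wait_for_url() {
  local url="$1" attempts="${2-30}"
  local i=0
  until curl -sf -o /dev/null "$url"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "gave up waiting for $url" >&2
      return 1
    fi
    sleep 1
  done
}

# In aws_es_init.sh you could then replace the fixed sleep with e.g.:
#   wait_for_url http://elasticsearch:9200/_cluster/health 60
```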

We can also add Kibana, which is always nice to have next to Elastic:

kibana:
  container_name: kibana
  image: docker.elastic.co/kibana/kibana:7.11.0
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  depends_on:
    - elasticsearch
  ports:
    - "5601:5601"
  networks:
    - my-service

This setup is particularly useful for testing and experimenting with Elasticsearch features before deploying to a production environment.

Also, if you are using Golang, there might be a problem with using the go-elasticsearch client against Elasticsearch versions higher than 7.13; this error can happen:

“the client noticed that the server is not Elasticsearch and we do not support this unknown product”

This is because of some fights between the Amazon and Elastic teams, and for now the only workaround I am aware of is to downgrade both the Golang library and Elasticsearch to 7.13 or lower.
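Some background, to the best of my understanding: starting with 7.14, Elasticsearch adds an `X-Elastic-Product: Elasticsearch` response header, and go-elasticsearch clients from 7.14 on refuse to talk to servers that don't send it, which is exactly what happens behind the Localstack proxy. A small helper of my own to check whether a raw header dump carries it:

```shell
# Return success if a raw header dump contains the product header the
# go-elasticsearch client (>= 7.14) looks for.
has_product_header() {
  printf '%s\n' "$1" | grep -qi '^x-elastic-product: *elasticsearch'
}

# To inspect what your endpoint actually sends back:
#   curl -si http://localhost:9200/ | grep -i x-elastic-product
```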
