Setting up local environments 💻 can be frustrating; sometimes it gives us nightmares.
The time and effort required to set up a local development environment depend on the services we are trying to run.
Remember the last time you joined a new team?
You might have faced difficulties configuring your local machine: installing development dependencies and tools, pulling repositories, performing the steps from the README, and getting everything running and working locally, all without actually knowing anything about the code or the underlying architecture.
You can expect at least a week or more of onboarding for a new developer.
With the help of Docker and its toolset, we can make things a whole lot easier.
Throughout this post, we will use Docker and docker-compose; you can refer to their official documentation for more details.
PRE-REQUISITES:
- Docker should be installed on the system.
- For installation, refer to the official Docker installation guide.
- Check if Docker is installed successfully using the command:
docker --version
The output should look something like this:
Docker version 20.10.13, build a224086
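Since we will also use docker-compose throughout this post, verify that it is installed as well:
docker-compose --version
// or, if you use the Compose V2 plugin: docker compose version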
- A Dockerfile should be present in each application repository so that its image can be built.
For example:
FROM node:14-alpine AS api-base
WORKDIR /src
ADD package.json /src
RUN npm install
ADD . /src
EXPOSE 3000
CMD ["sh", "start.sh"]
In the above Dockerfile, we are using the Node 14 Alpine image from Docker Hub for our application.
- WORKDIR: the working directory inside the Docker container.
- ADD: adds the package.json file to the Docker image.
- RUN: runs npm install to install the packages.
- EXPOSE: exposes port 3000, on which the container will run.
- CMD: the command to be executed once the Docker container starts. Here, we are running a custom script. It can vary as per your use case.
Note: You can overwrite the CMD argument while executing the image as well.
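For example, assuming the image is tagged api:latest (the tag we use later in this post), you could start a shell instead of the default start script:
docker run -it api:latest sh
// the trailing "sh" overrides the CMD ["sh", "start.sh"] defined in the Dockerfile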
If you want to ignore specific directories or files like node_modules, you can create a .dockerignore file and add them to it. This will exclude them from the image. In our case, here is a sample .dockerignore file:
node_modules
package-lock.json
.github
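With the Dockerfile and .dockerignore in place, you can build and run the image manually to verify it before wiring everything up with docker-compose (the api:latest tag is just an example):
docker build -t api:latest .
docker run -p 3000:3000 api:latest
// -p maps container port 3000 to host port 3000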
Setting up a Node.js Docker environment with dependent services
Which services to use depends entirely on the project requirements. In our case, we have two Node.js code repositories:
- api: REST APIs built using Node.js, with MySQL, Cassandra, Memcached, and RabbitMQ as dependent services
- web: Front-facing presentation repository built on top of the REST APIs
- Additionally, we use Nginx as a reverse proxy to route traffic to the api (running on port 3000) and web (running on port 5000) applications.
Before bringing up the application services, we need to bring up the dependent services and run the setup tasks (like DB migrations and seeds in our case).
Create a docker-compose.yml file to bring up the dependent services and the application services:
---
version: '3'
services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ${STORAGE_PATH}/mysql_8.0:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
    healthcheck:
      test: mysql -h localhost -uroot -proot -e "SELECT 1;"
  cassandra:
    image: cassandra:3.11
    container_name: cassandra
    ports:
      - 9042:9042
    volumes:
      - ${STORAGE_PATH}/cassandra_3.11:/var/lib/cassandra
    healthcheck:
      test: cqlsh --username cassandra --password cassandra -e "DESCRIBE cluster;"
  memcached:
    image: memcached:latest
    container_name: memcached
    ports:
      - 11211:11211
  rabbitmq:
    image: rabbitmq:latest
    container_name: rabbitmq
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ${STORAGE_PATH}/rabbitmq_latest:/var/lib/rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
    healthcheck:
      test: rabbitmq-diagnostics -q ping
  api_setup:
    image: api:latest
    container_name: api_setup
    volumes:
      - ${API_BASE_PATH}:/src
    build: ${API_BASE_PATH}
    env_file: ${API_BASE_PATH}/.env
    depends_on:
      mysql:
        condition: service_healthy
      memcached:
        condition: service_started
      cassandra:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
    command: sh -c 'npm i && node db/seed.js && node devops/exec/configStrategy.js --config-file-path /src/configStrategy.json && node devops/exec/configStrategy.js --config-file-path /src/configStrategy.json --activate-configs && node db/migrate.js'
  web:
    image: web:latest
    container_name: web
    ports:
      - 5000:5000
    volumes:
      - ${WEB_BASE_PATH}:/src
    build: ${WEB_BASE_PATH}
    env_file: ${WEB_BASE_PATH}/.env
    command: sh -c 'npm i && npx next dev -p 5000'
    depends_on:
      api:
        condition: service_started
  api:
    image: api:latest
    container_name: api
    ports:
      - 3000:3000
    volumes:
      - ${API_BASE_PATH}:/src
    build: ${API_BASE_PATH}
    env_file: ${API_BASE_PATH}/.env
    depends_on:
      mysql:
        condition: service_healthy
      memcached:
        condition: service_started
      cassandra:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
      api_setup:
        condition: service_completed_successfully
    command: sh -c 'npm i && sh start.sh'
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ${API_BASE_PATH}/setup/nginx.conf:/etc/nginx/nginx.conf
      - ${API_BASE_PATH}/setup/self.crt:/etc/nginx/self.crt
      - ${API_BASE_PATH}/setup/self.key:/etc/nginx/self.key
    depends_on:
      - web
      - api
- version: '3': must be provided at the top of the docker-compose.yml file. This tells docker-compose which compose file format version to use.
- services: inside this tag, we tell Docker which services we want to create.
- mysql: here, the first service is MySQL. You're free to choose the name.
- image: mysql:8.0: we tell Docker to use the MySQL image with version 8.0 from Docker Hub.
- environment: here you can pass environment variables. In our case, we are setting MYSQL_ROOT_PASSWORD.
- ports: the host-to-container port mappings. In our case, it's 3000:3000 for the api service. You can specify multiple port mappings as well, depending on which ports you exposed inside the Dockerfile.
- volumes: the volume helps us keep our data alive through a restart. Any changes made to the local files will reflect in the container (might require a restart of the container).
Note: Since we are bind-mounting the local directories into the containers, any changes made to the web application's local files will be automatically reflected in the browser, as we have set up hot-reload via webpack for our web application.
- command: you can specify a command argument for a service, like we have done in mysql to allow clients to connect to the server using mysql_native_password. This is the command the container will execute once it starts running. You can also specify multiple commands.
- healthcheck: ensures that the service is running and ready to accept connections.
- env_file: the .env file containing the required environment variables for the application.
- depends_on: resolves dependencies between services. For example, to start the api service, MySQL, Cassandra, and RabbitMQ need to be healthy, whereas api_setup needs to have completed. So the api service will wait until all these dependent services are ready.
- build: the path to the source code repository which you want to build. The Dockerfile should be present at this path.
Note: If an image is not already present for the application, docker-compose will create one with its tag name using the Dockerfile present in the application directory. In our case, it will create images for api and web as api:latest and web:latest via their respective Dockerfiles.
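Before bringing anything up, it can be handy to validate the file. Once the environment variables described below are exported, docker-compose will substitute them and print the resolved configuration (or report any syntax errors):
docker-compose config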
Add Nginx as a reverse proxy to access the services via the browser
Below is an example nginx.conf file, which you need to place at /Users/dev/Documents/api/setup/ so that it gets mounted into the container:
worker_processes 4;
error_log error.log;

events {
  worker_connections 1024;
}

http {
  include mime.types;
  default_type application/octet-stream;

  log_format main '"[$time_iso8601]": remote_addr="$remote_addr" - upstream_addr="$upstream_addr" - upstream_response_time="$upstream_response_time" - status="$status" - body_bytes_sent="$body_bytes_sent" - request="$request" - http_referer="$http_referer" - http_user_agent="$http_user_agent" - server_name="$server_name" - http_x_forwarded_for="$http_x_forwarded_for" - upstream_status="$upstream_status" - proxy_add_x_forwarded_for="$proxy_add_x_forwarded_for" - http_via="$http_via" - request_time="$request_time" - connection="$connection" - connection_requests="$connection_requests" - upstream_http_x_request_id="$upstream_http_x_request_id" - host="$host" - ssl_protocol="$ssl_protocol" - binary_remote_addr="$binary_remote_addr" - request_id="$request_id" - document_root="$document_root"';

  access_log access.log main;
  sendfile on;
  keepalive_timeout 65;

  gzip on;
  gzip_static on;
  gzip_types text/plain application/json;
  gzip_min_length 500;
  gzip_http_version 1.1;
  gzip_proxied expired no-cache no-store private auth;

  proxy_intercept_errors on;
  server_names_hash_bucket_size 128;
  client_max_body_size 32m;
  server_tokens off;

  server {
    listen 80;
    server_name yourdomain.com;
    rewrite ^/(.*)$ https://yourdomain.com/$1 permanent;
  }

  # HTTPS server
  server {
    listen 443 ssl;
    server_name yourdomain.com;

    proxy_next_upstream error;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    ssl_certificate self.crt;
    ssl_certificate_key self.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_prefer_server_ciphers on;

    location ~ \.(ico) {
      return 200;
    }

    location /api {
      proxy_pass http://api:3000;
      proxy_http_version 1.1;
      proxy_connect_timeout 300;
      proxy_read_timeout 300;
      proxy_send_timeout 300;
      send_timeout 300;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }

    location / {
      proxy_pass http://web:5000;
      proxy_http_version 1.1;
      proxy_connect_timeout 300;
      proxy_read_timeout 300;
      proxy_send_timeout 300;
      send_timeout 300;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }
  }

  include servers/*;
}
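Once the stack is up (see the steps below), you can verify that Nginx parsed the mounted configuration correctly, where nginx is the service name from the docker-compose.yml:
docker-compose exec nginx nginx -t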
Please make sure the self.crt and self.key files are also present in /Users/dev/Documents/api/setup/, since we are running Nginx in SSL mode.
Note: you can generate a self-signed SSL certificate by running the following command: openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out self.crt -keyout self.key
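To sanity-check the generated certificate, you can inspect its subject and validity dates:
openssl x509 -in self.crt -noout -subject -dates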
In the above nginx.conf, we are accessing the respective services by their container names, for example http://web:5000.
Also, make sure the following entry is present in your /etc/hosts file so you can access the services via the domain name:
127.0.0.1 yourdomain.com
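You can quickly confirm that the entry resolves locally (on a Unix-like host):
ping -c 1 yourdomain.com
// should resolve to 127.0.0.1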
Run multiple containers through docker-compose:
- Export the following ENV variables:
export API_BASE_PATH=/Users/dev/Documents/api
export WEB_BASE_PATH=/Users/dev/Documents/web
export STORAGE_PATH=/Users/dev
API_BASE_PATH: API repository path
WEB_BASE_PATH: web repository path
STORAGE_PATH: path to store MySQL, Cassandra, and RabbitMQ data
- Run the following command to bring the services up
docker-compose up -d
- To check whether the containers are up and running use the command:
docker-compose ps
- To check the container logs use the command:
docker-compose logs -f <container_id|service_name>
// replace the container id or service name here
- You can connect to a MySQL server running in a container using a MySQL client like Sequel Ace using the following credentials:
Host: 127.0.0.1
User: root
Password: root
- You can now access the endpoint via
https://yourdomain.com
Note: In case the browser shows a warning that the site is unsafe (because of the self-signed certificate), just type thisisunsafe in the browser window (Chrome only).
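As a quick sanity check from the command line, you can hit the endpoint with curl, and bring the whole stack down when you are done (data persists in ${STORAGE_PATH} thanks to the volumes):
curl -k https://yourdomain.com
// -k skips certificate verification for the self-signed certificate
docker-compose down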
Conclusion: By using Docker, the time to set up the local environment is drastically reduced, from days to hours, or even minutes.