
Increasing the developer experience with Docker Compose


The previous article taught us how to use Docker and Docker Compose with NestJS. In this article, we expand our knowledge by applying various tips and tricks that improve the development experience with Docker and Docker Compose. Check out this repository if you want to see the full code for this article.

## Building the Docker image automatically

In the previous part of this series, we built our basic Dockerfile.

Dockerfile

```dockerfile
FROM node:18-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci --omit=dev
RUN npm run build
USER node
CMD ["npm", "run", "start:prod"]
```

Then, we built the Docker image by running an appropriate command and giving it a tag.

```shell
docker build --tag "nestjs-api" .
```

Then, we added the above nestjs-api tag to our Docker Compose configuration.

docker-compose.yml

```yaml
version: "3"
services:
  nestjs-api:
    image: nestjs-api
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres
  # ...
```

Once we have the above file, we can run `docker-compose up` to start our application.

### Automating the process of building the image

Unfortunately, the above process requires the developers to run two commands instead of one. Also, we must remember to run `docker build` every time there is a change in our code. Instead, we can point Docker Compose to our Dockerfile and expect it to build the required Docker image.

docker-compose.yml

```yaml
version: "3"
services:
  postgres:
    image: postgres:15.1
    networks:
      - postgres
    volumes:
      - /data/postgres:/data/postgres
    env_file:
      - docker.env
  pgadmin:
    image: dpage/pgadmin4:6.18
    networks:
      - postgres
    ports:
      - "8080:80"
    volumes:
      - /data/pgadmin:/root/.pgadmin
    env_file:
      - docker.env
  nestjs-api:
    build:
      context: .
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
```

Thanks to adding the build section to our configuration and pointing it to the directory containing the Dockerfile, we can expect Docker Compose to build the necessary image. There is one caveat, though.
To ensure that Docker Compose always rebuilds the image even if an old version is available, we need to add the `--build` flag.

```shell
docker-compose up --build
```

## Dealing with cache

By adding the `--build` flag, we expect Docker Compose to rebuild our image every time we run `docker-compose up`. Let's look at how Docker handles the cache so that we don't wait too long for the build to finish.

Each instruction in our Dockerfile roughly translates to a layer in our image. Whenever a layer changes, it must be rebuilt together with all the following layers. Let's say we made a slight change in our main.ts file. Unfortunately, it affects the `COPY . .` instruction, since we use it to copy all of our files.

```dockerfile
FROM node:18-alpine
WORKDIR /user/src/app
COPY . .
RUN npm ci --omit=dev
RUN npm run build
USER node
CMD ["npm", "run", "start:prod"]
```

Due to how we structured our Dockerfile, making changes to our source code causes Docker to reinitialize the whole node_modules directory with the npm ci command. Let's improve that by changing how we use the COPY instruction.

Dockerfile

```dockerfile
FROM node:18-alpine
WORKDIR /user/src/app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .
RUN npm run build
USER node
CMD ["npm", "run", "start:prod"]
```

Above, we first copy only the package.json and package-lock.json files. Then, we install all of the dependencies. We can think of this as a milestone that Docker reaches and stores in the cache. Now, Docker knows that modifying the main.ts file does not affect the npm ci command and does not reinstall the packages unnecessarily.

The above approach can drastically decrease the time required to build the Docker image.

## Restarting the application on changes

Applying the changes we made to our source code currently takes a bit of work.
First, we need to stop all of our Docker containers and then rerun them, which causes the Docker image with the API to be rebuilt. Instead, when running our application in development, we can do the following:

1. install the necessary dependencies,
2. run the `npm run start:dev` command.

With the above approach, NestJS watches for any changes made to the source code and restarts automatically.

## Implementing a multi-stage Docker build

The issue is that when we build our Docker image using the Dockerfile, it always creates a production build and ends with npm run start:prod. The first step to changing that is to implement a multi-stage build. Thanks to this approach, we don't need separate Dockerfiles for development and production. Instead, we divide our Dockerfile into stages. Each stage begins with a FROM statement. We can copy files between stages, leaving behind any files we don't need anymore. Thanks to that, we can achieve a smaller Docker image.

Dockerfile

```dockerfile
# Installing dependencies:
FROM node:18-alpine AS install-dependencies
WORKDIR /user/src/app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY . .

# Creating a build:
FROM node:18-alpine AS create-build
WORKDIR /user/src/app
COPY --from=install-dependencies /user/src/app ./
RUN npm run build
USER node

# Running the application:
FROM node:18-alpine AS run
WORKDIR /user/src/app
COPY --from=install-dependencies /user/src/app/node_modules ./node_modules
COPY --from=create-build /user/src/app/dist ./dist
COPY package.json ./
CMD ["npm", "run", "start:prod"]
```

Each of the stages above uses the node:18-alpine image as its base, but that does not have to be the case. Please notice that our final Docker image contains only node_modules, dist, and package.json. Thanks to that, we've managed to shave off some unnecessary data by copying only the files needed to run the application.
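A multi-stage Dockerfile also lets us build each stage in isolation with the `docker build --target` flag, which is handy for verifying how much the final stage actually saves. Below is a sketch; the `nestjs-api:deps` and `nestjs-api:prod` tag names are just example choices, not something the article prescribes.

```shell
# Build only up to the install-dependencies stage and tag the result
docker build --target install-dependencies --tag nestjs-api:deps .

# Build the final "run" stage (the default when it is the last stage)
docker build --target run --tag nestjs-api:prod .

# Compare the resulting image sizes
docker image ls nestjs-api
```

The `run` image should come out noticeably smaller than the `install-dependencies` image, since it only receives node_modules, dist, and package.json.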
## Modifying the Docker Compose configuration

Thanks to dividing our Dockerfile into stages, we can tell Docker Compose to target a specific stage.

docker-compose.yml

```yaml
version: "3"
services:
  nestjs-api:
    build:
      context: .
      target: install-dependencies
    command: npm run start:dev
    volumes:
      - ./src:/user/src/app/src
    env_file:
      - .env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
    networks:
      - postgres
  # ...
```

Above, we explicitly tell Docker to run only the install-dependencies stage from our Dockerfile. This means that Docker won't create a production build. Since our install-dependencies stage does not contain the CMD instruction, we need some other way to tell Docker what to do. We do that by adding `command: npm run start:dev` to our Docker Compose configuration.

So far, we've been using the volumes property to allow our PostgreSQL Docker container to persist the data outside of the container. Thanks to that, when we run our PostgreSQL container after it's been shut down, we don't end up with an empty database. We can apply the same approach to the Docker container with our NestJS application. By adding `./src:/user/src/app/src` to our volumes, Docker synchronizes the src directory in the Docker container with the src directory on our host machine. Thanks to that, whenever we change our source code, the npm run start:dev process notices it and restarts our NestJS application.

## Running the debugger

A big part of the developer experience is being able to use a debugger. Fortunately, we can connect a debugger to a Node.js application running in a container. First, let's add a new script to our package.json file.

package.json

```json
{
  "scripts": {
    "start:inspect": "nest start --debug 0.0.0.0:9229 --watch --exec 'node --inspect-brk'",
    // ...
  },
  // ...
}
```

A few important things are happening above. First, we add `--debug 0.0.0.0:9229` to establish a WebSocket connection that our debugger can connect to.
Our debugger might also require the `--inspect-brk` flag, but the NestJS CLI no longer supports it out of the box. Because of that, we need to use the hack with the `--exec` flag.

We also need to allow our host machine to establish a connection with our Docker container on port 9229. To do that, we need to slightly alter the ports section in our Docker Compose configuration.

docker-compose.yml

```yaml
version: "3"
services:
  nestjs-api:
    build:
      context: .
      target: install-dependencies
    command: npm run start:inspect
    volumes:
      - ./src:/user/src/app/src
    env_file:
      - .env
    ports:
      - "3000:3000"
      - "9229:9229"
    depends_on:
      - postgres
    networks:
      - postgres
  # ...
```

Please notice that we are now running the `npm run start:inspect` command.

### Debugging through WebStorm

To debug our application running in a Docker container using WebStorm, we first need to run `docker-compose up --build` in the terminal to start all of our containers. Remember that because we've used the `--inspect-brk` flag, our NestJS application will not run until we connect the debugger.

Then, we need to go to "Run -> Edit Configurations" and create the "Attach to Node.js/Chrome" configuration by clicking the plus icon. Once we do that, we need to check the "Reconnect automatically" checkbox so that the debugger reconnects when our application restarts after changes.

As soon as we choose "Run -> Debug 'Attach to container'", WebStorm connects the debugger through a WebSocket to our Docker container.
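Other debug clients can attach to the same exposed port. As an aside not covered by the article, a minimal VS Code `.vscode/launch.json` for attaching might look like the sketch below; the configuration name is arbitrary, and `remoteRoot` must match the WORKDIR used in our Dockerfile.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to container",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/user/src/app",
      "restart": true
    }
  ]
}
```

Setting `"restart": true` plays the same role as WebStorm's "Reconnect automatically" checkbox: the debugger reattaches whenever the watched application restarts.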

Increasing the developer experience with Docker Compose | NestJS.io