# Cache with Redis. Running the app in a Node.js cluster
Redis is a fast and reliable key-value store. It keeps the data in its memory, although, by default, Redis also writes the data to the file system at least every 2 seconds. In the previous part of this series, we used a cache stored in our application's memory. While that is simple and efficient, it has its downsides. With applications where performance and availability are crucial, we often run multiple instances of our API. The incoming traffic is then load-balanced and redirected across those instances. Unfortunately, keeping the cache within the memory of the application means that multiple instances of our API do not share the same cache. Also, restarting the API means losing the cache. Because of all of that, it is worth looking into Redis.

## Setting up Redis

Within this series, we've used Docker Compose to set up our architecture. It is also very straightforward to set up Redis with Docker. By default, Redis works on port 6379.

docker-compose.yml

```yaml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  # ...
```

To connect Redis to NestJS, we also need the cache-manager-redis-store library.

```bash
npm install cache-manager-redis-store
```

Unfortunately, this library does not ship with TypeScript type definitions. To deal with that, we can create our own declaration file.

cacheManagerRedisStore.d.ts

```typescript
declare module 'cache-manager-redis-store' {
  import { CacheStoreFactory } from '@nestjs/common/cache/interfaces/cache-manager.interface';

  const cacheStore: CacheStoreFactory;
  export = cacheStore;
}
```

To connect to Redis, we need two new environment variables: the host and the port.

app.module.ts

```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import * as Joi from '@hapi/joi';

@Module({
  imports: [
    ConfigModule.forRoot({
      validationSchema: Joi.object({
        REDIS_HOST: Joi.string().required(),
        REDIS_PORT: Joi.number().required(),
        // ...
      })
    }),
    // ...
  ],
  controllers: [],
  providers: [],
})
export class AppModule {}
```

.env

```
REDIS_HOST=localhost
REDIS_PORT=6379
# ...
```

Once we do all of the above, we can use it to establish a connection with Redis.

posts.module.ts

```typescript
import * as redisStore from 'cache-manager-redis-store';
import { CacheModule, Module } from '@nestjs/common';
import PostsController from './posts.controller';
import PostsService from './posts.service';
import Post from './post.entity';
import { TypeOrmModule } from '@nestjs/typeorm';
import { SearchModule } from '../search/search.module';
import PostsSearchService from './postsSearch.service';
import { ConfigModule, ConfigService } from '@nestjs/config';

@Module({
  imports: [
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => ({
        store: redisStore,
        host: configService.get('REDIS_HOST'),
        port: configService.get('REDIS_PORT'),
        ttl: 120,
      }),
    }),
    TypeOrmModule.forFeature([Post]),
    SearchModule,
  ],
  controllers: [PostsController],
  providers: [PostsService, PostsSearchService],
})
export class PostsModule {}
```

## Managing our Redis server with an interface

As we use our app, we might want to look into our Redis data storage. A straightforward way to do that is to set up Redis Commander through Docker Compose.

docker-compose.yml

```yaml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
  # ...
```

With `depends_on` above, we make sure that Redis starts before Redis Commander.

Running Redis Commander in such a way creates a web user interface that we can see at http://localhost:8081/. Thanks to the way we set up the cache in the previous part of this series, we can now have multiple cache keys for the /posts endpoint.

## Running multiple instances of NestJS

JavaScript is single-threaded in nature.
Although that's the case, in the tenth article of the Node.js TypeScript series, we've learned that Node.js is capable of performing multiple tasks at a time. Aside from the fact that it runs input and output operations in separate threads, Node.js allows us to create multiple processes. To prevent heavy traffic from putting a strain on our API, we can launch a cluster of Node.js processes. Such child processes share server ports and work under the same address. With that, the cluster acts as a load balancer. With Node.js, we can also use Worker Threads. To read more about them, check out Node.js TypeScript #12. Introduction to Worker Threads with TypeScript.

runInCluster.ts

```typescript
import * as cluster from 'cluster';
import * as os from 'os';

export function runInCluster(
  bootstrap: () => Promise<void>,
) {
  const numberOfCores = os.cpus().length;

  if (cluster.isMaster) {
    // The primary process forks one worker per CPU core
    for (let i = 0; i < numberOfCores; ++i) {
      cluster.fork();
    }
  } else {
    // Each worker runs the application; the workers share the server port
    bootstrap();
  }
}
```
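To see how this fits together, here is a minimal sketch of wiring `runInCluster` into the application's entry point. It assumes the standard NestJS bootstrap used throughout this series; the import path of `runInCluster` and the port number are assumptions — adjust them to your project layout.

```typescript
// main.ts — hypothetical wiring; the runInCluster import path is an assumption
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { runInCluster } from './runInCluster';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}

// Instead of calling bootstrap() directly, let the cluster helper decide:
// the primary process forks the workers, and each worker runs bootstrap().
runInCluster(bootstrap);
```

Since every worker listens on the same port, the operating system distributes incoming connections among them — and because the cache now lives in Redis rather than in each process's memory, all workers share it.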