Cache with Redis. Running the app in a Node.js cluster

Redis is a fast and reliable key-value store. It keeps the data in its memory, although Redis, by default, writes the data to the file system at least every 2 seconds.

In the previous part of this series, we've used a cache stored in our application's memory. While it is simple and efficient, it has its downsides. With applications where performance and availability are crucial, we often run multiple instances of our API. With that, the incoming traffic is load-balanced and redirected to multiple instances. Unfortunately, keeping the cache within the memory of the application means that multiple instances of our API do not share the same cache. Also, restarting the API means losing the cache. Because of all of that, it is worth looking into Redis.

## Setting up Redis

Within this series, we've used Docker Compose to set up our architecture. It is also very straightforward to set up Redis with Docker. By default, Redis works on port 6379.

docker-compose.yml

```yaml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  # ...
```

To connect Redis to NestJS, we also need the cache-manager-redis-store library.

```
npm install cache-manager-redis-store
```

Unfortunately, this library does not ship with TypeScript declarations. To deal with that, we can create our own declaration file.

cacheManagerRedisStore.d.ts

```typescript
declare module 'cache-manager-redis-store' {
  import { CacheStoreFactory } from '@nestjs/common/cache/interfaces/cache-manager.interface';

  const cacheStore: CacheStoreFactory;
  export = cacheStore;
}
```

To connect to Redis, we need two new environment variables: the host and the port.

app.module.ts

```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import * as Joi from '@hapi/joi';

@Module({
  imports: [
    ConfigModule.forRoot({
      validationSchema: Joi.object({
        REDIS_HOST: Joi.string().required(),
        REDIS_PORT: Joi.number().required(),
        // ...
      }),
    }),
    // ...
  ],
  controllers: [],
  providers: [],
})
export class AppModule {}
```

.env

```
REDIS_HOST=localhost
REDIS_PORT=6379
# ...
```

Once we do all of the above, we can use them to establish a connection with Redis.

posts.module.ts

```typescript
import * as redisStore from 'cache-manager-redis-store';
import { CacheModule, Module } from '@nestjs/common';
import PostsController from './posts.controller';
import PostsService from './posts.service';
import Post from './post.entity';
import { TypeOrmModule } from '@nestjs/typeorm';
import { SearchModule } from '../search/search.module';
import PostsSearchService from './postsSearch.service';
import { ConfigModule, ConfigService } from '@nestjs/config';

@Module({
  imports: [
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => ({
        store: redisStore,
        host: configService.get('REDIS_HOST'),
        port: configService.get('REDIS_PORT'),
        ttl: 120,
      }),
    }),
    TypeOrmModule.forFeature([Post]),
    SearchModule,
  ],
  controllers: [PostsController],
  providers: [PostsService, PostsSearchService],
})
export class PostsModule {}
```

## Managing our Redis server with an interface

As we use our app, we might want to look into our Redis data storage. A straightforward way to do that is to set up Redis Commander through Docker Compose.

docker-compose.yml

```yaml
version: "3"
services:
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
  # ...
```

With depends_on above, we make sure that redis has been started before running Redis Commander.

Running Redis Commander in such a way creates a web user interface that we can see at http://localhost:8081/.

Thanks to the way we set up the cache in the previous part of this series, we can now have multiple cache keys for the /posts endpoint.

## Running multiple instances of NestJS

JavaScript is single-threaded in nature.
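That single thread means a long, CPU-bound computation blocks everything else, including other incoming requests. Here is a small, illustrative sketch of the problem in plain Node.js, with no NestJS involved; the `busyWork` function is just a stand-in for any heavy synchronous task:

```typescript
// Illustration: a synchronous, CPU-bound loop blocks the single thread,
// so the timer below cannot fire until the loop finishes.
function busyWork(ms: number): number {
  const end = Date.now() + ms;
  let iterations = 0;
  while (Date.now() < end) iterations++;
  return iterations;
}

const order: string[] = [];
setTimeout(() => order.push('timer'), 0); // scheduled to run immediately...
busyWork(50);                             // ...but blocked by this loop
order.push('busy work done');

setTimeout(() => {
  console.log(order); // ['busy work done', 'timer']
}, 10);
```

Even though the timer was due immediately, it only fires once the synchronous loop lets go of the thread. This is exactly the kind of strain that running multiple processes helps to spread out.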
Although that's the case, in the tenth article of the Node.js TypeScript series, we've learned that Node.js is capable of performing multiple tasks at a time. Aside from the fact that it runs input and output operations in separate threads, Node.js allows us to create multiple processes. To prevent heavy traffic from putting a strain on our API, we can launch a cluster of Node.js processes. Such child processes share server ports and work under the same address. With that, the cluster works as a load balancer.

With Node.js, we can also use Worker Threads. To read more about them, check out Node.js TypeScript #12. Introduction to Worker Threads with TypeScript.

runInCluster.ts

```typescript
import * as cluster from 'cluster';
import * as os from 'os';

export function runInCluster(
  bootstrap: () => Promise<void>,
) {
  const numberOfCores = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < numberOfCores; ++i) {
      cluster.fork();
    }
  } else {
    bootstrap();
  }
}
```

In the example above, our master process creates a child process for each core in our CPU. By default, Node.js uses the round-robin approach, in which the master process listens on the port we've opened. It accepts incoming connections and distributes them across all of the processes in our cluster. Round-robin is the default policy on all platforms except Windows.

If you want to read more about the cluster and how to change the scheduling policy, check out Node.js TypeScript #11. Harnessing the power of many processes using a cluster.

To use the above logic, we need to supply it with our bootstrap function.
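To picture the round-robin policy itself, here is a toy model of the dispatch loop. It is not Node's actual implementation, only the idea: the master cycles through its workers, handing each new connection to the next one in line and wrapping around at the end. The `RoundRobinBalancer` name is purely illustrative:

```typescript
// Toy model of round-robin dispatch: hand each incoming connection
// to the next worker in the list, wrapping around at the end.
class RoundRobinBalancer<Worker> {
  private next = 0;

  constructor(private readonly workers: Worker[]) {}

  dispatch(): Worker {
    const chosen = this.workers[this.next];
    this.next = (this.next + 1) % this.workers.length;
    return chosen;
  }
}

const balancer = new RoundRobinBalancer(['worker-1', 'worker-2', 'worker-3']);
const assignments = ['a', 'b', 'c', 'd'].map(() => balancer.dispatch());
console.log(assignments); // ['worker-1', 'worker-2', 'worker-3', 'worker-1']
```

With four connections and three workers, the fourth connection wraps back to the first worker, which is why every worker ends up with a similar share of the traffic.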
A fitting place to call runInCluster is the main.ts file:

main.ts

```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import * as cookieParser from 'cookie-parser';
import { ValidationPipe } from '@nestjs/common';
import { ExcludeNullInterceptor } from './utils/excludeNull.interceptor';
import { ConfigService } from '@nestjs/config';
import { config } from 'aws-sdk';
import { runInCluster } from './utils/runInCluster';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe({ transform: true }));
  app.useGlobalInterceptors(new ExcludeNullInterceptor());
  app.use(cookieParser());

  const configService = app.get(ConfigService);
  config.update({
    accessKeyId: configService.get('AWS_ACCESS_KEY_ID'),
    secretAccessKey: configService.get('AWS_SECRET_ACCESS_KEY'),
    region: configService.get('AWS_REGION'),
  });

  await app.listen(3000);
}
runInCluster(bootstrap);
```

On Linux, we can easily check how many processes our cluster spawns with `ps -e | grep node`.

## Summary

In this article, we've expanded on the topic of caching by using Redis. One of its advantages is that the Redis cache can be shared across multiple instances of our application. To experience it, we've used the Node.js cluster to spawn multiple processes containing our API. Node.js delegates the incoming requests to the various processes, balancing the load.
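As a closing note, the runInCluster helper above never replaces a worker that dies. The respawn policy can be sketched in isolation; the `Forker` interface and `keepClusterFull` function below are hypothetical stand-ins for Node's cluster module, so the policy can be exercised without actually forking:

```typescript
// The respawn policy in isolation: Forker is a tiny stand-in for Node's
// cluster module, so the policy can be tested without forking processes.
interface Forker {
  fork(): void;
  onWorkerExit(handler: (exitCode: number) => void): void;
}

function keepClusterFull(forker: Forker, size: number): void {
  for (let i = 0; i < size; ++i) {
    forker.fork();
  }
  // Replace a worker only when it exits with a non-zero (crash) code;
  // clean shutdowns are left alone.
  forker.onWorkerExit((exitCode) => {
    if (exitCode !== 0) {
      forker.fork();
    }
  });
}

// Exercise the policy with a fake forker that just counts forks.
let forks = 0;
let exitHandler: (code: number) => void = () => {};
const fake: Forker = {
  fork: () => { forks += 1; },
  onWorkerExit: (handler) => { exitHandler = handler; },
};

keepClusterFull(fake, 4);
exitHandler(1); // a crashed worker is replaced
exitHandler(0); // a clean shutdown is not
console.log(forks); // 5
```

With the real cluster module, the same policy hangs off `cluster.on('exit', (worker, code) => ...)` in the master process.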
