# Uploading public files to Amazon S3
While storing files directly in the database is doable, it might not be the best approach. Files can take up a lot of space, and serving them from the database can hurt the performance of the application. It also increases the size of the database and, therefore, makes backups bigger and slower. A good alternative is to store files separately with an external provider, such as Google Cloud, Azure, or Amazon Web Services. In this article, we look into uploading files to Amazon Simple Storage Service, also referred to as S3.

You can find all the code from this series in this repository.

## Connecting to Amazon S3

Amazon S3 provides storage that we can use with any type of file. We organize files into buckets and manage them in our API through an SDK. Once we create the AWS account, we can log in as a root user. Even though we could authorize as root to use S3 through our API, this is not the best approach.

### Setting up a user

Let's create a new user with a restricted set of permissions. To do so, we need to open the Identity and Access Management (IAM) panel and create a user.

Since we want this user to be able to manage everything connected to S3, let's set up the proper access. After doing that, we are presented with an Access key ID and a Secret access key. We need those to connect to AWS through our API. We also need to choose one of the available regions. Let's add all of them to our `.env` file:
```
# .env
# ...
AWS_REGION=eu-central-1
AWS_ACCESS_KEY_ID=*******
AWS_SECRET_ACCESS_KEY=*******
```

Also, let's add these variables to our environment variables validation schema in `AppModule`:

```typescript
ConfigModule.forRoot({
  validationSchema: Joi.object({
    POSTGRES_HOST: Joi.string().required(),
    POSTGRES_PORT: Joi.number().required(),
    POSTGRES_USER: Joi.string().required(),
    POSTGRES_PASSWORD: Joi.string().required(),
    POSTGRES_DB: Joi.string().required(),
    JWT_SECRET: Joi.string().required(),
    JWT_EXPIRATION_TIME: Joi.string().required(),
    AWS_REGION: Joi.string().required(),
    AWS_ACCESS_KEY_ID: Joi.string().required(),
    AWS_SECRET_ACCESS_KEY: Joi.string().required(),
    PORT: Joi.number(),
  }),
}),
```

### Connecting to AWS through the SDK

Once we have the necessary variables, we can connect to AWS using the official SDK for Node.js. Let's install it first:

```
npm install aws-sdk @types/aws-sdk
```

Since we've got everything that we need to configure the SDK, let's use it. One of the ways to do so is to use `aws-sdk` directly in our `main.ts` file:

```typescript
// main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import * as cookieParser from 'cookie-parser';
import { ValidationPipe } from '@nestjs/common';
import { ExcludeNullInterceptor } from './utils/excludeNull.interceptor';
import { ConfigService } from '@nestjs/config';
import { config } from 'aws-sdk';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalPipes(new ValidationPipe());
  app.useGlobalInterceptors(new ExcludeNullInterceptor());
  app.use(cookieParser());

  const configService = app.get(ConfigService);
  config.update({
    accessKeyId: configService.get('AWS_ACCESS_KEY_ID'),
    secretAccessKey: configService.get('AWS_SECRET_ACCESS_KEY'),
    region: configService.get('AWS_REGION'),
  });

  await app.listen(3000);
}
bootstrap();
```

## Creating our first bucket

In Amazon S3, data is organized into buckets. We can have multiple buckets with different settings. Let's open the Amazon S3 panel and create a bucket.
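Since `config.update` runs once at bootstrap, every `S3` client created later in the application picks up these credentials automatically. As a side note, the three values have to be present together for the SDK to authenticate; a minimal sketch of guarding that invariant (the `buildAwsConfig` helper and its error message are illustrative, not part of the article):

```typescript
// Sketch: assembling the AWS SDK configuration object from environment
// variables, failing fast when any of the three required values is missing.
// `buildAwsConfig` is a hypothetical helper for illustration only.
interface AwsConfig {
  accessKeyId: string;
  secretAccessKey: string;
  region: string;
}

function buildAwsConfig(env: Record<string, string | undefined>): AwsConfig {
  const accessKeyId = env.AWS_ACCESS_KEY_ID;
  const secretAccessKey = env.AWS_SECRET_ACCESS_KEY;
  const region = env.AWS_REGION;
  if (!accessKeyId || !secretAccessKey || !region) {
    throw new Error('Missing AWS configuration in environment variables');
  }
  return { accessKeyId, secretAccessKey, region };
}

// Example usage with placeholder values:
const awsConfig = buildAwsConfig({
  AWS_ACCESS_KEY_ID: 'AKIA-placeholder',
  AWS_SECRET_ACCESS_KEY: 'secret-placeholder',
  AWS_REGION: 'eu-central-1',
});
console.log(awsConfig.region); // 'eu-central-1'
```

In the article itself this guarding is delegated to the Joi `validationSchema` above, which rejects the application startup when a required variable is absent.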
Please note that the name of the bucket must be globally unique. We can set up our bucket to contain public files: all files that we upload to it will be publicly available. We might use it to manage files such as avatars. The last step here is to add the name of the bucket to our environment variables:

```
# .env
# ...
AWS_PUBLIC_BUCKET_NAME=nestjs-series-public-bucket
```

```typescript
// src/app.module.ts
ConfigModule.forRoot({
  validationSchema: Joi.object({
    // ...
    AWS_PUBLIC_BUCKET_NAME: Joi.string().required(),
  }),
}),
```

## Uploading images through our API

Since we've got the AWS connection set up, we can proceed with uploading our files. For starters, let's create a `PublicFile` entity:

```typescript
// src/files/publicFile.entity.ts
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
class PublicFile {
  @PrimaryGeneratedColumn()
  public id: number;

  @Column()
  public url: string;

  @Column()
  public key: string;
}

export default PublicFile;
```

By saving the URL directly in the database, we can access the public file very quickly. The `key` property uniquely identifies the file in the bucket. We need it to access the file, for example, if we want to delete it.

The next step is creating a service that uploads files to the bucket and saves the data about each file to our Postgres database. Since we want the keys to be unique, we use the `uuid` library:

```
npm install uuid @types/uuid
```

```typescript
// src/files/files.service.ts
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import PublicFile from './publicFile.entity';
import { S3 } from 'aws-sdk';
import { ConfigService } from '@nestjs/config';
import { v4 as uuid } from 'uuid';

@Injectable()
export class FilesService {
  constructor(
    @InjectRepository(PublicFile)
    private publicFilesRepository: Repository<PublicFile>,
    private readonly configService: ConfigService,
  ) {}
}
```
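The service excerpt above is cut off before the upload logic; given the imports (`S3`, `uuid`, `ConfigService`) and the `PublicFile` columns, the method that follows presumably builds a unique key, uploads the buffer to the public bucket, and persists the resulting key and URL. A self-contained sketch of the key-generation idea (`buildFileKey` is a hypothetical helper; the article uses the `uuid` package, while this sketch uses Node's built-in `randomUUID`, which produces the same kind of v4 UUID):

```typescript
import { randomUUID } from 'crypto';

// Sketch: prefixing the original filename with a UUID so that two uploads of
// `avatar.png` never collide in the bucket. `buildFileKey` is illustrative,
// not a function from the article.
function buildFileKey(filename: string): string {
  return `${randomUUID()}-${filename}`;
}

// Hypothetical shape of the upload flow (shown as comments only, since it
// needs a live bucket; the exact method name and parameters are assumptions):
//
//   const uploadResult = await new S3()
//     .upload({
//       Bucket: this.configService.get('AWS_PUBLIC_BUCKET_NAME'),
//       Body: dataBuffer,
//       Key: buildFileKey(filename),
//     })
//     .promise();
//
// Saving `uploadResult.Key` and `uploadResult.Location` into the
// `publicFilesRepository` then fills the `key` and `url` columns of the
// PublicFile entity described above.

console.log(buildFileKey('avatar.png'));
```

Keeping the original filename as a suffix of the key makes objects recognizable in the S3 console while the UUID prefix guarantees uniqueness.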