Welcome, devs, to the world of development and automation. Today, we're diving into an exciting project in which we will be building a Serverless Image Processing Pipeline with AWS services.
The project begins with creating S3 buckets for storing uploaded images and processed thumbnails, and eventually uses many services like Lambda, API Gateway (to trigger the Lambda function), and DynamoDB (for storing image metadata); finally, we'll run the app in an ECS cluster by creating a Docker image of the project.
This project is packed with cloud services and development tech stacks like Next.js, and working through it will further improve your understanding of cloud services and how they interact with one another. So without further ado, let's get started!
Note: The code and instructions in this post are for demo use and learning only. A production environment would require a tighter grip on configurations and security.
Prerequisites
Before we get into the project, we need to make sure we have the following requirements met on our system:
- An AWS Account: Since we use AWS services for the project, we need an AWS account. A configured IAM user with access to the required services is also recommended.
- Basic Understanding of AWS Services: Since we're dealing with many AWS services, it's better to have a decent understanding of them, such as S3, which is used for storage, API Gateway to trigger the Lambda function, and many more.
- Node Installed: Our frontend is built with Next.js, so having Node on your system is necessary.
For code reference, here is the GitHub repo.
AWS Services Setup
We will start the project by setting up our AWS services. First and foremost, we'll create two S3 buckets, namely sample-image-uploads-bucket and sample-thumbnails-bucket. The reason for these long names is that bucket names have to be globally unique across all of AWS.
To create a bucket, head over to the S3 dashboard, click 'Create Bucket', select 'General Purpose', give it a name (sample-image-uploads-bucket), and leave the rest of the configuration as default.
Similarly, create the other bucket named sample-thumbnails-bucket, but on this bucket, make sure to uncheck Block Public Access, because we'll need public access to it for our ECS cluster.
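If you prefer the terminal, here is a roughly equivalent AWS CLI sketch, assuming the bucket names above and the us-east-1 region:

# Create the two buckets (us-east-1 needs no LocationConstraint)
aws s3api create-bucket --bucket sample-image-uploads-bucket --region us-east-1
aws s3api create-bucket --bucket sample-thumbnails-bucket --region us-east-1

# Relax Block Public Access on the thumbnails bucket so a public-read policy can apply
aws s3api put-public-access-block --bucket sample-thumbnails-bucket \
  --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"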



We need to make sure that the sample-thumbnails-bucket has public read access so that the ECS frontend can display the thumbnails. For that, we'll attach the following policy to the bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sample-thumbnails-bucket/*"
    }
  ]
}
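To attach this policy from the CLI instead of the console, you could save it to a local file (the filename here is just an example) and run:

# Apply the public-read policy saved locally as thumbnails-policy.json
aws s3api put-bucket-policy --bucket sample-thumbnails-bucket --policy file://thumbnails-policy.json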
After creating the buckets, let's move on to our database for storing image metadata. We will create a DynamoDB table for that. Go to your DynamoDB console, click Create Table, give it a name (image_metadata), and for the partition key, select String and name it image_id.
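The same table can be created from the CLI; a sketch (on-demand billing is an assumption here, any capacity mode works for this demo):

# image_id (String) as the partition key
aws dynamodb create-table --table-name image_metadata \
  --attribute-definitions AttributeName=image_id,AttributeType=S \
  --key-schema AttributeName=image_id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST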

AWS services will communicate with each other, so they need a role with proper permissions. To create a role, go to the IAM dashboard, select Roles, and click Create Role. Under trusted entity type, select AWS service, and under use case, choose Lambda. Attach the following policies:
- AmazonS3FullAccess
- AmazonDynamoDBFullAccess
- CloudWatchLogsFullAccess
Give this role a name (Lambda-Image-Processor-Role) and save it.
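For reference, a CLI sketch of the same role setup (the trust policy is what lets Lambda assume the role):

# Create the role with a Lambda trust policy
aws iam create-role --role-name Lambda-Image-Processor-Role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the three managed policies
aws iam attach-role-policy --role-name Lambda-Image-Processor-Role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name Lambda-Image-Processor-Role --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy --role-name Lambda-Image-Processor-Role --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess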


Creating the Lambda Function
We have our Lambda role, buckets, and DynamoDB table ready, so now let's create the Lambda function that will process the image and make a thumbnail out of it. Since we're using the Pillow library to process the images, and Lambda doesn't provide it by default, we'll add a layer to the Lambda function. To do that, follow the steps below.
Go to your Lambda dashboard and click Create a Function. Select Author from Scratch, choose Python 3.9 as the runtime, and give it a name: image-processor. In the Code tab, you have the Upload from option; select that, choose zip file, and upload your zip file of the image-processor.
Go to Configuration, and under the Permissions column, edit the configuration by changing the existing role to the role we created, Lambda-Image-Processor-Role.


Now go to your S3 bucket (sample-image-uploads-bucket), open its Properties section, and scroll down to Event Notifications. Click Create Event Notification, give it a name (trigger-image-processor), select PUT as the event type, and select the Lambda function we created (image-processor).
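Under the hood, the console does two things here: it grants S3 permission to invoke the function and writes a notification configuration on the bucket. A CLI sketch of the same (replace <ACCOUNT_ID> with your AWS account ID):

# Allow S3 to invoke the Lambda function
aws lambda add-permission --function-name image-processor \
  --statement-id s3-invoke --action lambda:InvokeFunction \
  --principal s3.amazonaws.com --source-arn arn:aws:s3:::sample-image-uploads-bucket

# Fire the function on object PUTs
aws s3api put-bucket-notification-configuration --bucket sample-image-uploads-bucket \
  --notification-configuration '{"LambdaFunctionConfigurations":[{"Id":"trigger-image-processor","LambdaFunctionArn":"arn:aws:lambda:us-east-1:<ACCOUNT_ID>:function:image-processor","Events":["s3:ObjectCreated:Put"]}]}'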

Now, since Pillow doesn't come built into the Lambda runtime, we'll take the following steps to fix that:
- Go to your Lambda function (image-processor) and scroll down to the Layers section; click Add a Layer.

- In the Add Layer section, select Specify an ARN and provide this ARN: arn:aws:lambda:us-east-1:770693421928:layer:Klayers-p39-pillow:1. Change the region accordingly; I'm using us-east-1. Add the layer.

Now, in the Code tab of your Lambda function you'll have a lambda_function.py; put the following content inside lambda_function.py:
import boto3
import uuid
import os
from PIL import Image
from io import BytesIO
import datetime

s3 = boto3.client('s3')
dynamodb = boto3.client('dynamodb')

UPLOAD_BUCKET = '<YOUR_BUCKET_NAME>'
THUMBNAIL_BUCKET = '<YOUR_BUCKET_NAME>'
DDB_TABLE = 'image_metadata'

def lambda_handler(event, context):
    # Grab the bucket and object key from the S3 event that triggered us
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # Download the uploaded image and resize it in memory
    response = s3.get_object(Bucket=bucket, Key=key)
    image = Image.open(BytesIO(response['Body'].read()))
    image.thumbnail((200, 200))

    thumbnail_buffer = BytesIO()
    image.save(thumbnail_buffer, 'JPEG')
    thumbnail_buffer.seek(0)

    # Upload the thumbnail to the thumbnails bucket
    thumbnail_key = f"thumb_{key}"
    s3.put_object(
        Bucket=THUMBNAIL_BUCKET,
        Key=thumbnail_key,
        Body=thumbnail_buffer,
        ContentType='image/jpeg'
    )

    # Store the image metadata in DynamoDB
    image_id = str(uuid.uuid4())
    original_url = f"https://{UPLOAD_BUCKET}.s3.amazonaws.com/{key}"
    thumbnail_url = f"https://{THUMBNAIL_BUCKET}.s3.amazonaws.com/{thumbnail_key}"
    uploaded_at = datetime.datetime.now().isoformat()

    dynamodb.put_item(
        TableName=DDB_TABLE,
        Item={
            'image_id': {'S': image_id},
            'original_url': {'S': original_url},
            'thumbnail_url': {'S': thumbnail_url},
            'uploaded_at': {'S': uploaded_at}
        }
    )

    return {
        'statusCode': 200,
        'body': f"Thumbnail created: {thumbnail_url}"
    }
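To smoke-test the function from the Lambda console's Test tab before wiring up the frontend, you can use a stripped-down S3 put event like the sketch below. The handler only reads the bucket name and object key, so the rest of a real event payload is omitted; the key is assumed to be an object that already exists in the uploads bucket:

{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "sample-image-uploads-bucket" },
        "object": { "key": "ghibil-art.jpg" }
      }
    }
  ]
}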
Now, we'll need another Lambda function for API Gateway, because it will act as the entry point for our frontend ECS app to fetch image data from DynamoDB.
To create the Lambda function, go to your Lambda dashboard, click Create Function, select Author from Scratch and Python 3.9 as the runtime, and give it a name: get-image-metadata. In the configuration, select the same role that we assigned to the other Lambda function (Lambda-Image-Processor-Role).

Now, in the Code section of the function, put the following content:
import boto3
import json

dynamodb = boto3.client('dynamodb')
TABLE_NAME = 'image_metadata'

def lambda_handler(event, context):
    try:
        # Fetch every item from the metadata table
        # (a scan is fine for a demo; see the conclusion for a note on scaling)
        response = dynamodb.scan(TableName=TABLE_NAME)
        images = []
        for item in response['Items']:
            images.append({
                'image_id': item['image_id']['S'],
                'original_url': item['original_url']['S'],
                'thumbnail_url': item['thumbnail_url']['S'],
                'uploaded_at': item['uploaded_at']['S']
            })
        return {
            'statusCode': 200,
            'headers': {
                "Content-Type": "application/json"
            },
            'body': json.dumps(images)
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': f"Error: {str(e)}"
        }
Creating the API Gateway
The API Gateway will act as the entry point for your ECS frontend application to fetch image data from DynamoDB. It connects to the Lambda function that queries DynamoDB and returns the image metadata. The URL of the gateway is used in our frontend app to display images. To create the API Gateway, follow these steps:
- Go to the AWS Management Console → Search for API Gateway → Click Create API.
- Select HTTP API.
- Click Build.
- API name: image-gallery-api
- Add integrations: Select Lambda and choose the get-image-metadata function.
- Select Method: GET and Path: /images.
- Endpoint type: Regional.
- Click Next and create the API Gateway URL.

Before creating the frontend, let's test the application manually. First, go to your upload S3 bucket (sample-image-uploads-bucket) and upload a jpg/jpeg image; other image types will not work, as our function only processes these two formats:
In the picture above, I've uploaded an image titled "ghibil-art.jpg". Once uploaded, it triggers the Lambda function, which creates a thumbnail named "thumb_ghibil-art.jpg", stores it in sample-thumbnails-bucket, and saves the information about the image in the image_metadata table in DynamoDB.


In the image above, you can see the items inside the Explore Items section of our DynamoDB table image_metadata. To test the API Gateway, we'll hit the Invoke URL of our image-gallery-api followed by /images. With the curl command, it shows output like the following:
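(The API ID below is a placeholder; substitute your own invoke URL.)

curl https://<YOUR_API_ID>.execute-api.us-east-1.amazonaws.com/images
# Returns a JSON array of objects with image_id, original_url, thumbnail_url, and uploaded_at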
Now that our application is working fine, we can deploy a frontend to visualize the project.
Creating the Frontend App
For the sake of simplicity, we will be creating a minimal gallery frontend using Next.js, Dockerizing it, and deploying it on ECS. To create the app, follow these steps:
Initialization
npx create-next-app@latest image-gallery
cd image-gallery
npm install
npm install axios
Create the Gallery Component
Create a new file components/Gallery.js:
'use client';
import { useState, useEffect } from 'react';
import axios from 'axios';
import styles from './Gallery.module.css';

const Gallery = () => {
  const [images, setImages] = useState([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // Fetch the image metadata from the API Gateway endpoint on mount
    const fetchImages = async () => {
      try {
        const response = await axios.get('https://<YOUR_API_GATEWAY_INVOKE_URL>/images');
        setImages(response.data);
        setLoading(false);
      } catch (error) {
        console.error('Error fetching images:', error);
        setLoading(false);
      }
    };
    fetchImages();
  }, []);

  if (loading) {
    return <div className={styles.loading}>Loading...</div>;
  }

  return (
    <div className={styles.gallery}>
      {images.map((image) => (
        <div key={image.image_id} className={styles.imageCard}>
          <img
            src={image.thumbnail_url}
            alt="Gallery thumbnail"
            width={200}
            height={150}
            className={styles.thumbnail}
          />
          <p className={styles.date}>
            {new Date(image.uploaded_at).toLocaleDateString()}
          </p>
        </div>
      ))}
    </div>
  );
};

export default Gallery;
Make sure to change the gateway URL to your API Gateway invoke URL.
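Hardcoding the URL is fine for a demo; a slightly cleaner pattern (anticipating the production note in the conclusion) is a public environment variable. The variable name below is just a convention I'm choosing, not something the project requires:

// .env.local (not committed): NEXT_PUBLIC_API_URL=https://<YOUR_API_ID>.execute-api.us-east-1.amazonaws.com
// Then in Gallery.js:
const response = await axios.get(`${process.env.NEXT_PUBLIC_API_URL}/images`);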
Add CSS Module
Create components/Gallery.module.css:
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 20px;
  padding: 20px;
  max-width: 1200px;
  margin: 0 auto;
}

.imageCard {
  background: #fff;
  border-radius: 8px;
  box-shadow: 0 2px 5px rgba(0,0,0,0.1);
  overflow: hidden;
  transition: transform 0.2s;
}

.imageCard:hover {
  transform: scale(1.05);
}

.thumbnail {
  width: 100%;
  height: 150px;
  object-fit: cover;
}

.date {
  text-align: center;
  padding: 10px;
  margin: 0;
  font-size: 0.9em;
  color: #666;
}

.loading {
  text-align: center;
  padding: 50px;
  font-size: 1.2em;
}
Update the Home Page
Modify app/page.js:
import Gallery from '../components/Gallery';

export default function Home() {
  return (
    <main>
      <h1 style={{ textAlign: 'center', padding: '20px' }}>Image Gallery</h1>
      <Gallery />
    </main>
  );
}
Next.js's Built-in Image Component
To use Next.js's built-in Image component for better optimization, update next.config.mjs:
const nextConfig = {
  images: {
    domains: ['sample-thumbnails-bucket.s3.amazonaws.com'],
  },
};

export default nextConfig;
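Note that on newer Next.js releases, images.domains is deprecated in favor of remotePatterns; if your version complains, a sketch of the equivalent config:

const nextConfig = {
  images: {
    // Allow optimized images served from the public thumbnails bucket
    remotePatterns: [
      { protocol: 'https', hostname: 'sample-thumbnails-bucket.s3.amazonaws.com' },
    ],
  },
};

export default nextConfig;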
Run the Application
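Start the development server from the project root:

npm run dev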
Go to http://localhost:3000 in your browser, and you will see the application running with all the uploaded thumbnails.
For demonstration purposes, I've put four images (jpeg/jpg) in my sample-image-uploads-bucket. Through the Lambda function, they're transformed into thumbnails and stored in the sample-thumbnails-bucket.



The application looks like this:

Containerizing and Creating the ECS Cluster
Now we're almost done with the project, so we'll proceed by creating a Dockerfile for the project as follows:
# Use the official Node.js image as a base
FROM node:18-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files and install dependencies
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Build the Next.js app
RUN npm run build

# Use a lightweight Node.js image for production
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy built files from the builder stage
COPY --from=builder /app ./

# Expose port
EXPOSE 3000

# Run the application
CMD ["npm", "start"]
Now we'll build the Docker image using:
docker build -t sample-nextjs-app .
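Before pushing anywhere, you can sanity-check the image locally:

# Map the container's port 3000 to localhost:3000
docker run -p 3000:3000 sample-nextjs-app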

Now that we have our Docker image, we'll push it to an AWS ECR repo. For that, follow these steps:
Step 1: Push the Docker Image to Amazon ECR
- Go to the AWS Management Console → Search for ECR (Elastic Container Registry) → Open ECR.
- Create a new repository:
- Click Create repository.
- Set Repository name (e.g., sample-nextjs-app).
- Choose Private (or Public if required).
- Click Create repository.
- Push your Docker image to ECR:
- In the newly created repository, click View push commands.
- Follow the commands to:
- Authenticate Docker with ECR.
- Build, tag, and push your image (a typical sequence is sketched below).
- You need to have the AWS CLI configured for this step.
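The push commands shown by the console look roughly like this (replace <ACCOUNT_ID> with your AWS account ID and adjust the region if needed):

# Authenticate Docker with ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image built earlier
docker tag sample-nextjs-app:latest <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/sample-nextjs-app:latest
docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/sample-nextjs-app:latest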


Step 2: Create an ECS Cluster
aws ecs create-cluster --cluster-name sample-ecs-cluster
Step 3: Create a Task Definition
- In the ECS Console, go to Task Definitions.
- Click Create new Task Definition.
- Choose Fargate → Click Next step.
- Set task definition details (a JSON sketch of the finished task definition follows this list):
- Name: sample-nextjs-task
- Task role: ecsTaskExecutionRole (create one if missing). It needs permission to pull from ECR, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:us-east-1:624448302051:repository/sample-nextjs-app"
    }
  ]
}
- Task memory & CPU: Choose appropriate values (e.g., 512 MB memory & 256 CPU units).
- Define the container:
- Click Add container.
- Container name: sample-nextjs-container.
- Image URI: Paste the ECR image URI from Step 1.
- Port mappings: Set 3000 for both the container and host ports.
- Click Add.
- Click Create.
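For reference, the console steps above produce a task definition roughly like the JSON below; if you prefer the CLI, you could register it with aws ecs register-task-definition --cli-input-json file://task-def.json (the account ID and file name are placeholders):

{
  "family": "sample-nextjs-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "sample-nextjs-container",
      "image": "<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/sample-nextjs-app:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }]
    }
  ]
}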
Step 4: Create an ECS Service
- Go to ECS → Click Clusters → Select your cluster (sample-ecs-cluster).
- Click Create Service.
- Choose Fargate → Click Next step.
- Set up the service (a CLI equivalent is sketched after this list):
- Task definition: Select sample-nextjs-task.
- Cluster: sample-ecs-cluster.
- Service name: sample-nextjs-service.
- Number of tasks: 1 (can scale later).
- Networking settings:
- Select an existing VPC.
- Choose public subnets.
- Enable Auto-assign Public IP.
- Click Next step → Create service.
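The same service can be created from the CLI; a sketch (the subnet and security group IDs are placeholders for your own VPC resources):

aws ecs create-service --cluster sample-ecs-cluster \
  --service-name sample-nextjs-service \
  --task-definition sample-nextjs-task \
  --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}"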


Step 5: Access the Application
- Go to ECS > Clusters > sample-ecs-cluster.
- Click on the Tasks tab.
- Click on the running task.
- Find the Public IP under Network.
Open a browser and go to:
http://<TASK_PUBLIC_IP>:3000
Your Next.js app should be live! 🚀

Conclusion
This marks the end of the blog. Today, we dove into many AWS services: S3, IAM, ECR, Lambda, ECS, Fargate, and API Gateway. We started the project by creating S3 buckets and eventually deployed our application in an ECS cluster.
Throughout this guide, we covered containerizing the Next.js app, pushing it to ECR, configuring ECS task definitions, and deploying via the AWS console. This setup allows for automated scaling, easy updates, and secure API access, all key benefits of a cloud-native deployment.
Potential production configurations may include changes like the following:
- Implementing more restrictive IAM permissions and tightening control over public access to S3 buckets (using CloudFront, pre-signed URLs, or a backend proxy instead of making the sample-thumbnails-bucket public)
- Adding error handling and pagination (especially for DynamoDB queries)
- Using secure VPC/network configurations for ECS (such as an Application Load Balancer and private subnets instead of direct public IPs)
- Addressing scaling concerns by replacing the DynamoDB scan operation in the metadata-fetching Lambda with a DynamoDB query
- Using environment variables instead of a hardcoded API Gateway URL in the Next.js code