Before we dive in, why not take a quick peek at my portfolio: https://priyalraj.com, and while you’re at it, check out my SaaS venture: https://shavelinks.com. It’s where innovation meets simplicity.
Imagine a nightclub on a Friday night — people are lining up, music’s pumping, and everyone wants in. But there’s a bouncer at the door. Their job? Let in a manageable number of guests, keep troublemakers out, and make sure the club doesn’t get overcrowded.
Now, think of your API as that nightclub. Without a rate limiter — your digital bouncer — anyone can bombard your server with endless requests. This can crash your backend, drain your resources, or open the door to abuse and attacks.
A rate limiter enforces a rule like “Only 100 requests allowed per minute.” It watches each incoming request and decides whether to let it through or block it. It’s a simple idea, but it has a huge impact.
Here’s why you need a rate limiter in your APIs:
Stop attackers from hammering your login endpoint 1000x per second. Services like DataDome rely on this to protect against bots and credential stuffing.
When traffic surges, rate limiting absorbs the shock. Platforms like Medium use it to keep their APIs stable, even when an article goes viral.
Slow APIs = angry users. Cloudflare uses rate limiting to keep its infrastructure speedy and error-free during traffic storms.
Every request costs something — rate limiting cuts waste, especially on platforms like Vercel or AWS Lambda, where every millisecond matters.
[User Request] --> [Rate Limiter Check]
                          |
                          |-- Allowed --> [Process Request] --> [Response]
                          |
                          |-- Blocked --> [429 Too Many Requests Response]
We’re going to build a Fixed Window Counter style rate limiter in Next.js, designed specifically to run inside API routes.
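Before wiring it into Next.js, it helps to see the Fixed Window Counter idea on its own. Here is a minimal, framework-free sketch in plain TypeScript (the function and variable names are illustrative, not part of the file we build below):

```typescript
// Minimal fixed-window counter: allow up to `maxRequests` per `windowMs` window.
interface FixedWindow {
  count: number;
  resetTime: number; // timestamp (ms) at which this window expires
}

const windows = new Map<string, FixedWindow>();

function allow(key: string, maxRequests: number, windowMs: number, now = Date.now()): boolean {
  const w = windows.get(key);
  if (!w || now > w.resetTime) {
    // First request in a fresh window: start counting from 1.
    windows.set(key, { count: 1, resetTime: now + windowMs });
    return true;
  }
  w.count++;
  return w.count <= maxRequests;
}
```

With a limit of 3 per minute, the first three calls for a key return true, the fourth returns false, and once the window expires the counter starts over. That is the whole algorithm; everything else in this article is packaging it for Next.js.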
Instead of wiring the logic directly into each route, we’ll create a reusable utility file: 📁 lib/rateLimiter.ts
This approach keeps your code modular, clean, and scalable, making it easy to apply rate limiting across multiple endpoints without repetition.
✅ Step 1: Create rateLimiter.ts
Inside lib/
import { NextRequest, NextResponse } from 'next/server';

interface RateLimitConfig {
  maxRequests: number;
  windowMs: number;
  message?: string;
}

interface RateLimitRecord {
  count: number;
  resetTime: number;
}

// Global in-memory store for rate limit records, keyed by client IP
const rateLimitStore = new Map<string, RateLimitRecord>();

const createRateLimiter = (config: RateLimitConfig) => {
  const finalConfig = {
    message: 'Too many requests from this IP, please try again later.',
    ...config
  };

  const getClientIp = (request: NextRequest): string => {
    // Prefer the first IP in x-forwarded-for (set by proxies and load balancers)
    const forwardedFor = request.headers.get('x-forwarded-for');
    if (forwardedFor) {
      return forwardedFor.split(',')[0].trim();
    }
    // request.ip is only populated on some platforms (and is not available in
    // every Next.js version), so fall back to 'unknown'
    return request.ip || 'unknown';
  };

  const isRateLimited = (ip: string): boolean => {
    const now = Date.now();
    const record = rateLimitStore.get(ip);

    // Clean up expired records so the store doesn't grow unbounded
    rateLimitStore.forEach((value, key) => {
      if (now > value.resetTime) {
        rateLimitStore.delete(key);
      }
    });

    if (!record || now > record.resetTime) {
      // Start a fresh window for this IP
      rateLimitStore.set(ip, {
        count: 1,
        resetTime: now + finalConfig.windowMs
      });
      return false;
    }

    // Increment the count within the current window
    record.count++;
    rateLimitStore.set(ip, record);

    return record.count > finalConfig.maxRequests;
  };

  const check = (request: NextRequest): { isLimited: boolean; response?: NextResponse } => {
    const ip = getClientIp(request);

    if (isRateLimited(ip)) {
      return {
        isLimited: true,
        response: NextResponse.json(
          { message: finalConfig.message, status: 429 },
          { status: 429 }
        )
      };
    }

    return { isLimited: false };
  };

  const clear = (): void => {
    rateLimitStore.clear();
  };

  return {
    check,
    clear
  };
};

// Create rate limiters with a given request count per window (default: one minute)
export const createRequestsMinuteLimiter = (requestsPerMinute: number, windowMs: number = 60 * 1000) =>
  createRateLimiter({
    maxRequests: requestsPerMinute,
    windowMs,
    message: `Too many requests. Please try again in ${windowMs / 1000} seconds.`
  });

// Export the createRateLimiter function for custom configurations
export { createRateLimiter };
✅ Step 2: Use It Inside Your API Route
Let’s say you want to limit requests to 10 per minute on a simple API route. In this case, you only pass the request count; the default time window of 1 minute (60,000 ms) is applied automatically. Example: Limit to 10 Requests Per Minute.
// app/api/my-api/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { createRequestsMinuteLimiter } from '@/lib/rateLimiter';

// Limit: 10 requests per minute (default window: 60,000ms)
const limiter = createRequestsMinuteLimiter(10);

export async function GET(request: NextRequest) {
  const { isLimited, response } = limiter.check(request);
  if (isLimited) return response!;

  return NextResponse.json({ message: 'Success ✅' });
}
createRequestsMinuteLimiter(10): sets the rate limit to 10 requests per minute. The default time window of 1 minute (60,000 ms) is applied automatically.
limiter.check(request): checks whether the incoming request exceeds the rate limit. If the limit is exceeded, it returns a 429 Too Many Requests response; otherwise, it allows the request to proceed.
This setup is ideal for simple, low-traffic API routes where you want to impose a basic rate limit without any additional complexity.
If you want more flexibility, you can customise the time window for your rate limiter. Instead of using the default 1-minute window, you can set a custom time window that suits your needs. Example: Limit to 10 Requests Per 30 Seconds.
// app/api/my-api/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { createRequestsMinuteLimiter } from '@/lib/rateLimiter';

// Limit: 10 requests per 30 seconds
const limiter = createRequestsMinuteLimiter(10, 30000);

export async function GET(request: NextRequest) {
  const { isLimited, response } = limiter.check(request);
  if (isLimited) return response!;

  return NextResponse.json({ message: 'Success ✅' });
}
createRequestsMinuteLimiter(10, 30000): here, the rate limiter allows 10 requests per 30 seconds (30,000 ms). The second parameter, 30,000, defines the custom window duration.
limiter.check(request): checks whether the incoming request exceeds the custom limit of 10 requests per 30 seconds. If the limit is exceeded, it responds with a 429 Too Many Requests message; otherwise, it allows the request to proceed.
This custom window configuration is helpful when you need a shorter or longer window to better match the behaviour of your API or the expected traffic patterns.
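One caveat worth knowing: because the utility keeps a single module-level Map keyed only by IP, two limiters created for different routes will count against the same record for a given client. If you want independent budgets per route, a small tweak is to namespace the key per limiter. Here is a framework-free sketch of the idea (the `name` parameter and key format are illustrative, not part of the utility above):

```typescript
// Namespace each limiter's records so different routes don't share counters.
interface Rec {
  count: number;
  resetTime: number;
}
const store = new Map<string, Rec>();

function makeLimiter(name: string, maxRequests: number, windowMs: number) {
  return (ip: string, now = Date.now()): boolean => {
    const key = `${name}:${ip}`; // e.g. "search:1.2.3.4" vs "checkout:1.2.3.4"
    const rec = store.get(key);
    if (!rec || now > rec.resetTime) {
      store.set(key, { count: 1, resetTime: now + windowMs });
      return true;
    }
    rec.count++;
    return rec.count <= maxRequests;
  };
}
```

With this, exhausting the "search" limiter leaves the "checkout" limiter for the same IP untouched.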
Serverless platforms like Vercel spin up stateless functions to handle each request — they don’t share memory between requests. Since our rate limiter uses an in-memory Map, every new request may hit a fresh instance with a fresh Map. So the count resets every time, and the limiter gets bypassed.
If you’re deploying to serverless, replace the Map with a shared store, such as Redis (for example, a managed service like Upstash) or another external key-value database.
These stores maintain request state across invocations, making your rate limiter production-safe.
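One way to prepare for that swap is to hide the store behind a tiny interface so the limiter logic never touches the Map directly. The sketch below is illustrative (the `RateLimitStore` interface and `MemoryStore` stub are my own names, not a real library API); in production, the same interface would be backed by Redis using INCR plus PEXPIRE, kept atomic with a MULTI block or Lua script:

```typescript
// A store-agnostic fixed-window limiter: the store needs one atomic operation,
// "increment this key and return its count, setting a TTL on first use".
interface RateLimitStore {
  incr(key: string, windowMs: number): Promise<number>;
}

// In-memory stub with the same semantics, for local development and tests.
class MemoryStore implements RateLimitStore {
  private data = new Map<string, { count: number; expiresAt: number }>();

  async incr(key: string, windowMs: number): Promise<number> {
    const now = Date.now();
    const entry = this.data.get(key);
    if (!entry || now > entry.expiresAt) {
      this.data.set(key, { count: 1, expiresAt: now + windowMs });
      return 1;
    }
    entry.count++;
    return entry.count;
  }
}

async function isRateLimited(
  store: RateLimitStore,
  ip: string,
  maxRequests: number,
  windowMs: number
): Promise<boolean> {
  const count = await store.incr(ip, windowMs);
  return count > maxRequests;
}
```

Swapping MemoryStore for a Redis-backed implementation then requires no changes to the route handlers at all.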
When you’re building your custom rate limiter in Next.js, there are various real-world use cases where your custom rate limiting logic can be a lifesaver:
On authentication endpoints (like /login or /register), it's crucial to prevent brute-force attacks. By applying your custom rate limiter, you can limit the number of login attempts per user or IP, significantly reducing the risk of credential stuffing.
On high-traffic endpoints like /search or /checkout, where users might be making frequent requests, rate limiting is essential to ensure the API doesn't get overwhelmed. With your custom rate limiter, you can easily adjust limits for these routes.
After implementing your custom rate limiter, it’s critical to test and log its behaviour to ensure it performs as expected.
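For the login case, a common refinement is to key the counter on the IP and the attempted username together, so each (IP, account) pair gets its own small budget rather than sharing one per-IP counter with all other traffic. A framework-free sketch (the function name and key format are illustrative):

```typescript
// Fixed-window counter keyed on IP + username for login attempts.
const attempts = new Map<string, { count: number; resetTime: number }>();

function allowLoginAttempt(
  ip: string,
  username: string,
  maxAttempts: number,
  windowMs: number,
  now = Date.now()
): boolean {
  // Lowercase the username so "Alice" and "alice" share one budget.
  const key = `${ip}:${username.toLowerCase()}`;
  const rec = attempts.get(key);
  if (!rec || now > rec.resetTime) {
    attempts.set(key, { count: 1, resetTime: now + windowMs });
    return true;
  }
  rec.count++;
  return rec.count <= maxAttempts;
}
```

Inside a login route, you would call this before checking credentials and return a 429 when it comes back false.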
To further enhance your custom rate limiter, consider adding optional features such as telling blocked clients when they can try again via the Retry-After header.
Building your custom rate limiter in Next.js is an excellent way to gain full control over how traffic interacts with your APIs. With a modular and configurable setup, you can scale your solution to handle different traffic patterns and user behaviours.
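A Retry-After value falls straight out of the state the limiter already keeps: it is simply the number of seconds until the window's resetTime. A small sketch of that calculation (the helper name is my own; to use it in the utility above, you would need isRateLimited to expose the record's resetTime to check):

```typescript
// Compute a Retry-After header value (whole seconds) from the window's
// reset timestamp; never report less than 1 second.
function retryAfterSeconds(resetTime: number, now = Date.now()): number {
  return Math.max(1, Math.ceil((resetTime - now) / 1000));
}
```

The value would then be attached to the 429 response, for example via the headers option of NextResponse.json, as 'Retry-After': String(retryAfterSeconds(resetTime)).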
As you continue to refine your rate limiter, always remember that the goal is to strike a balance between protecting your resources and providing a smooth experience for legitimate users. By applying the techniques and configurations discussed here, you’ll ensure your APIs stay rock-solid, even under heavy traffic.
If you enjoyed this article, please make sure to Like, Comment and follow me on Twitter.