
Node.js Rate Limiting FAQ & Answers

6 expert Node.js Rate Limiting answers researched from official documentation. Every answer cites authoritative sources you can verify.


6 questions
A

Use the express-rate-limit middleware. Install it with: npm install express-rate-limit. Basic config: const rateLimit = require('express-rate-limit'); const limiter = rateLimit({windowMs: 15 * 60 * 1000, max: 100, message: 'Too many requests from this IP, please try again later.'}); app.use('/api/', limiter). Parameters: windowMs (time window in milliseconds; 15 minutes = 900000 ms), max (maximum requests per window per IP), message (response body sent when the limit is exceeded). This limits each IP to 100 requests per 15 minutes on /api/ routes. When the limit is exceeded, the middleware responds with a 429 Too Many Requests status and the configured message. Apply it to specific routes rather than globally so API endpoints are limited while static assets are served without restriction. A minimal working example is sketched below.
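
A minimal sketch of this setup, assuming Express 4+ and express-rate-limit v6+ are installed; the /api/status route is only illustrative.

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Limit each IP to 100 requests per 15-minute window on /api/ routes.
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes = 900000 ms
  max: 100,                 // max requests per window per IP
  message: 'Too many requests from this IP, please try again later.'
});

app.use('/api/', limiter);

// Illustrative endpoint; routes outside /api/ (e.g. static assets) are not limited.
app.get('/api/status', (req, res) => res.json({ ok: true }));

app.listen(3000);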

99% confidence
A

rate-limiter-flexible is an advanced rate limiting library with Redis support, memory efficiency, and flexible rules. Prefer it over express-rate-limit when you: (1) need distributed rate limiting across multiple servers (Redis/Memcached backend), (2) need complex rate limit rules (different limits per user tier), (3) run high-traffic applications (it is memory optimized), (4) need multiple windows (burst plus sustained limits). Features: supports 10+ backends (Redis, MongoDB, PostgreSQL, and more), guards against memory store attacks (attackers creating millions of keys), and implements a token bucket algorithm. Pattern: const {RateLimiterRedis} = require('rate-limiter-flexible'); const limiter = new RateLimiterRedis({storeClient: redisClient, points: 10, duration: 1}). Use express-rate-limit for simple apps, single-server deployments, and quick setup; use rate-limiter-flexible for production at scale. A sketch of wiring it into Express is shown below.
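
One way to wire this into Express, sketched under the assumption that a Redis server is reachable and the ioredis client is used (any Redis client supported by rate-limiter-flexible would work):

const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');
const express = require('express');

const redisClient = new Redis({ enableOfflineQueue: false });

const limiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'api',
  points: 10,   // 10 requests
  duration: 1,  // per 1 second, per key
});

// Express middleware: consume one point per request, keyed by client IP.
function rateLimiterMiddleware(req, res, next) {
  limiter.consume(req.ip)
    .then(() => next())
    .catch(() => res.status(429).send('Too Many Requests'));
}

const app = express();
app.use(rateLimiterMiddleware);
app.listen(3000);

Because the counters live in Redis, every server in a cluster shares the same limits.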

99% confidence
A

Limit request body size to prevent attackers from sending huge payloads that consume memory/CPU. Configure in Express: app.use(express.json({limit: '100kb'})); app.use(express.urlencoded({limit: '100kb', extended: true})). The default limit is 100kb - increase it only if legitimately needed (file uploads should use multipart, not JSON). Attack: an attacker sends a 100MB JSON payload, the server tries to parse it, consumes memory, and crashes or slows down, affecting all requests. Prevention: (1) set the limit based on actual needs (APIs rarely need more than 100kb), (2) validate the Content-Length header: if (Number(req.headers['content-length']) > 100000) return res.status(413).send('Payload too large'), (3) use streaming parsers for large payloads, (4) set nginx client_max_body_size 100k. Combine with rate limiting for defense in depth. A small sketch follows.
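
A small sketch combining both measures; the 100 kB threshold is the illustrative value from above:

const express = require('express');
const app = express();

// Reject oversized payloads before the body parsers buffer them.
app.use((req, res, next) => {
  const length = Number(req.headers['content-length']);
  if (length > 100 * 1024) {
    return res.status(413).send('Payload too large');
  }
  next();
});

// Body parsers with explicit size limits (100kb is also the Express default).
app.use(express.json({ limit: '100kb' }));
app.use(express.urlencoded({ limit: '100kb', extended: true }));

app.listen(3000);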

99% confidence
A

Slowloris is a 'low and slow' DDoS attack that sends partial HTTP requests very slowly, keeping connections open indefinitely and exhausting the server's connection pool. Attack: the client sends headers byte by byte with long delays, never completes the request, and holds the connection for hours; a few hundred such connections can exhaust a typical server. Node.js protection: (1) set the server timeout: server.setTimeout(30000) (30 seconds), (2) set the headers timeout: server.headersTimeout = 20000, (3) set the request timeout: server.requestTimeout = 30000 (enforced by default since Node 18), (4) put nginx in front as a reverse proxy with client_body_timeout 10s; client_header_timeout 10s. Pattern: const server = app.listen(3000); server.setTimeout(30000); server.headersTimeout = 20000. If a request is not completed within the timeout, Node closes the connection. Also limit concurrent connections: server.maxConnections = 1000. Nginx provides the first line of defense. The sketch below pulls these settings together.
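
A minimal sketch of these server-level settings; the specific timeout values are illustrative:

const express = require('express');
const app = express();

const server = app.listen(3000);

// Drop sockets that are too slow to send headers or finish a request.
server.headersTimeout = 20000;  // 20s to receive the complete request headers
server.requestTimeout = 30000;  // 30s to receive the entire request (Node 14.11+)
server.setTimeout(30000);       // 30s socket inactivity timeout

// Cap simultaneous connections so slow clients cannot exhaust the pool.
server.maxConnections = 1000;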

99% confidence
A

Use express-rate-limit with custom keyGenerator and skip functions. Pattern: const limiter = rateLimit({windowMs: 15 * 60 * 1000, max: 100, keyGenerator: (req) => req.headers['x-forwarded-for'] || req.ip, skip: (req) => whitelist.includes(req.ip), handler: (req, res) => res.status(429).json({error: 'Rate limit exceeded', retryAfter: req.rateLimit.resetTime})}). keyGenerator extracts the identifier (use X-Forwarded-For behind a proxy, falling back to req.ip; note that with app.set('trust proxy', ...) enabled, req.ip already reflects the forwarded client IP). skip bypasses limits for trusted IPs (internal services, monitoring). handler sends a custom response with retry information. Advanced: apply different limits per route: app.use('/api/expensive', strictLimiter); app.use('/api/', normalLimiter). Store counters in Redis for distributed deployments: store: new RedisStore({client: redis}). A sketch of this configuration follows.
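
A sketch of this configuration, assuming express-rate-limit v6+; it relies on the default IP key (with trust proxy enabled, req.ip reflects X-Forwarded-For), and the whitelist addresses and route names are illustrative:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
app.set('trust proxy', 1); // one proxy in front; req.ip now comes from X-Forwarded-For

const whitelist = ['127.0.0.1', '10.0.0.5']; // illustrative trusted IPs

const normalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  skip: (req) => whitelist.includes(req.ip), // bypass for internal services/monitoring
  handler: (req, res) =>
    res.status(429).json({
      error: 'Rate limit exceeded',
      retryAfter: req.rateLimit.resetTime, // Date when the current window resets
    }),
});

const strictLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 10 });

// Stricter limit on expensive endpoints, normal limit on the rest of /api/.
app.use('/api/expensive', strictLimiter);
app.use('/api/', normalLimiter);

app.listen(3000);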

99% confidence
A

Implement rate limiting at BOTH layers for defense in depth. Nginx layer (first line): use limit_req_zone for IP-based limits BEFORE requests reach Node.js. Config: limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s; limit_req zone=api burst=20 nodelay;. Benefits: blocks attacks before they consume Node.js resources, protects against network-level DDoS, and handles high request volumes efficiently (C vs JavaScript). Application layer (second line): use express-rate-limit for fine-grained control - per-user limits (authenticated), per-endpoint limits (/login stricter than /status), and business-logic rules (premium users get higher limits). Pattern: nginx absorbs burst traffic (e.g. 100k req/s) and passes legitimate traffic to the app, which enforces user-specific limits. Nginx can't distinguish users and the app can't absorb massive traffic on its own - both are needed. A sketch of the application-layer half is shown below.
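
A sketch of the application-layer half, assuming nginx already enforces the limit_req zone above and that an earlier auth middleware populates a hypothetical req.user; express-rate-limit accepts a function for max, which allows per-tier limits:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
app.set('trust proxy', 1); // nginx sits in front; trust its X-Forwarded-For header

// Per-endpoint limit: login attempts are far stricter than general API traffic.
const loginLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 5 });

// Per-user limit: key by authenticated user ID when available, otherwise by IP,
// and give premium users a higher ceiling (req.user is an assumed auth result).
const apiLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: (req) => (req.user && req.user.premium ? 300 : 60),
  keyGenerator: (req) => (req.user ? 'user:' + req.user.id : req.ip),
});

app.post('/login', loginLimiter, (req, res) => res.json({ ok: true }));
app.use('/api/', apiLimiter);

app.listen(3000);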

99% confidence