
Node.js Performance Bottlenecks FAQ & Answers

Expert answers on Node.js performance bottlenecks, covering memory leaks, event loop blocking, heap profiling, caching, and process management.


What are the most common causes of memory leaks in Node.js?

A

Five common causes:

(1) Timers not cleared: setTimeout/setInterval without a matching clearTimeout/clearInterval. Pattern: const timer = setTimeout(fn, 1000); /* forgot clearTimeout(timer) */.
(2) Event listeners not removed: emitter.on() without emitter.off(). Listeners accumulate and hold references.
(3) Global variables: persist for the application's lifetime and are never garbage collected. Avoid: global.cache = {}.
(4) Closures holding references: inner functions retain outer-scope variables. Pattern: const heavy = loadHugeData(); setInterval(() => console.log(heavy.length), 1000) holds heavy forever.
(5) Caches without limits: a Map or plain object growing unbounded. Use an LRU cache with a max size.

Detection: Chrome DevTools heap snapshots, node --inspect, clinic.js doctor. Monitor process.memoryUsage().heapUsed over time.
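Cause (5) is usually fixed with a bounded cache. Below is a minimal LRU sketch built on Map's insertion-order iteration; it is illustrative only, and production code usually reaches for a library such as lru-cache.

```javascript
// Minimal LRU cache sketch built on Map's insertion-order iteration.
// Illustrative only; production code usually uses a library such as
// lru-cache.
class LRUCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert so this key becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // The first key in iteration order is the least recently used.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new LRUCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a'; 'b' is now least recently used
cache.set('c', 3); // evicts 'b', so memory stays bounded
```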

How do I prevent event listener memory leaks in Node.js?

A

Remove event listeners when done, using off() or removeListener(). Keep a reference to the handler so it can be matched later:

const handler = (data) => console.log(data);
emitter.on('data', handler);
/* later */ emitter.off('data', handler);

Use once() for single-use listeners (it auto-removes): emitter.once('connect', () => console.log('Connected')).

Problem: each on() call adds another listener that is never removed, so they accumulate over time. Example: a WebSocket reconnect loop that adds a new listener on every reconnect.

MaxListenersExceededWarning: EventEmitter warns when more than 10 listeners are added for a single event (a potential leak). Raise the limit only if it is intentional: emitter.setMaxListeners(20).

Best practice: clean up in cleanup/destroy methods:

class Service {
  init() { this.db.on('error', this.handleError); }
  destroy() { this.db.off('error', this.handleError); }
}

For more involved cleanup, use an AbortController: the events.once() and events.on() helpers accept a { signal: controller.signal } option, and calling controller.abort() detaches the listener.

What operations block the Node.js event loop, and how do I detect them?

A

Five blocking operations:

(1) Synchronous file I/O: fs.readFileSync blocks until the read completes. Use the async forms: fs.readFile or fs.promises.readFile.
(2) Heavy computations: large for loops, complex regexes, JSON.parse on huge strings. Offload to Worker Threads.
(3) Long synchronous functions: crypto.pbkdf2Sync blocks (use the async pbkdf2).
(4) Blocking libraries: some npm packages use sync operations internally. Check the source.
(5) Large JSON.stringify/JSON.parse: 50MB+ of JSON can block for seconds. Stream or chunk instead.

Detection: monitor event loop lag:

const start = Date.now();
setImmediate(() => {
  const lag = Date.now() - start;
  if (lag > 100) console.warn(`Event loop lag: ${lag}ms`);
});

Production: use prom-client's event loop lag metric. Target: under 10ms of lag under normal load.

How do I find a memory leak with Chrome DevTools heap snapshots?

A

Launch Node with the inspector: node --inspect app.js. Open chrome://inspect and click 'inspect' on your process.

Steps: (1) Take a heap snapshot (Memory panel → Heap snapshot → Take snapshot), (2) perform the suspected leak action (trigger the endpoint, run the operation), (3) take a second snapshot, (4) compare snapshots (select a snapshot → Comparison dropdown).

Look for: object counts growing between snapshots, retained size increasing, unexpected objects surviving. Filter by constructor (Array, Object, Closure) to find leaks. Click an object to see its retainer tree (what is keeping it alive).

Common findings: detached DOM nodes (frontend), unclosed database connections (backend), event emitters with lingering listeners, timers not cleared. Fix: free references, clear timers, remove listeners.

Alternative: clinic.js doctor for production-safe profiling without the inspector.

Should I use WeakMap or WeakSet for caching to avoid memory leaks?

A

Yes: use WeakMap/WeakSet when cache keys are objects that should be garbage collected once nothing else references them. WeakMap keys must be objects, and an entry is automatically removed when its key is GC'd. Pattern:

const cache = new WeakMap();
function getData(obj) {
  if (cache.has(obj)) return cache.get(obj);
  const data = expensiveComputation(obj);
  cache.set(obj, data);
  return data;
}

When obj is no longer referenced anywhere, its WeakMap entry becomes collectable. A regular Map holds a strong reference to the key, preventing garbage collection even if the key is unused elsewhere.

Use WeakMap for: storing metadata about objects you don't control, caching keyed on object identity, the private-data pattern. Don't use it for: primitive keys (strings and numbers; use a regular Map with LRU eviction), when you need iteration (WeakMap is not iterable), or when you need the number of entries (WeakMap has no size property).
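A quick way to see the pattern working end to end; expensiveComputation here is a hypothetical stand-in that counts its calls, showing the second lookup is served from the WeakMap:

```javascript
// The WeakMap caching pattern from the answer, exercised end to end.
// expensiveComputation is a hypothetical stand-in that counts calls.
const cache = new WeakMap();
let calls = 0;

function expensiveComputation(obj) {
  calls += 1;
  return Object.keys(obj).length;
}

function getData(obj) {
  if (cache.has(obj)) return cache.get(obj);
  const data = expensiveComputation(obj);
  cache.set(obj, data);
  return data;
}

const key = { a: 1, b: 2 };
console.log(getData(key)); // computed: 2
console.log(getData(key)); // cached: 2
console.log(calls);        // 1: the second call never recomputed
// Once 'key' is unreachable, its cache entry is eligible for GC too.
```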

Can PM2 restart my app automatically when memory usage is too high?

A

PM2 can auto-restart a process when a memory threshold is exceeded. Start with a limit: pm2 start app.js --max-memory-restart 500M (restarts the process when its memory usage exceeds 500MB). Check the config: pm2 show app. Monitor: pm2 monit (real-time memory/CPU).

Pattern in ecosystem.config.js: module.exports = {apps: [{name: 'api', script: 'server.js', max_memory_restart: '500M', instances: 4, exec_mode: 'cluster'}]}. Start it with: pm2 start ecosystem.config.js.

Memory threshold: set it to 70-80% of available memory (e.g., 500M in a 1GB container). In cluster mode PM2 reloads gracefully, restarting one instance at a time.

Benefits: prevents OOM crashes, automatic recovery, zero downtime in cluster mode. Not a fix: find and eliminate memory leaks properly with profiling. max_memory_restart is a safety net, not a solution.
