
Node.js Memory Leaks: How to Find and Fix the Leak That Is Taking Down Your Server

Memory leaks can bring down even the most carefully architected Node.js applications. Learn how to detect, diagnose, and fix memory leaks using heap snapshots, profiling tools, and APM platforms—with real-world examples and prevention strategies.


You've launched your Node.js application. It runs smoothly for the first few hours—fast, responsive, handling traffic like a champ. Then, slowly, response times creep up. Memory usage climbs. After 12 hours, your app is using 2GB instead of 200MB. After 24 hours, it crashes with "JavaScript heap out of memory". You restart it, and the cycle repeats.

Welcome to the world of memory leaks.

Memory leaks in Node.js are insidious. Unlike languages with manual memory management, where leaks are often obvious, JavaScript's garbage collector is supposed to handle cleanup automatically. But when you accidentally keep references to objects you no longer need, those objects never get collected, memory usage grows unbounded, and eventually your application dies.

The good news? With the right tools and techniques, memory leaks are entirely preventable and diagnosable. This guide shows you how to find, fix, and prevent memory leaks in Node.js applications using heap snapshots, profiling tools, and modern APM platforms.

Understanding Memory in Node.js

Node.js runs on V8, Chrome's JavaScript engine. V8 uses an automatic garbage collector that periodically frees memory occupied by unreachable objects. But garbage collection only works when there are no references to an object.
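A minimal sketch of what "reachable" means in practice (the `registry` array and `handleRequest` are hypothetical names): as long as any live reference can reach an object, the garbage collector will never reclaim it.

```typescript
// Any object reachable from a live reference is never collected.
const registry: object[] = [];

function handleRequest(): void {
  const payload = { data: new Array(10_000).fill('x') };
  registry.push(payload); // payload stays reachable via registry -> never GC'd
}

for (let i = 0; i < 100; i++) handleRequest();
console.log(registry.length); // all 100 payloads are still retained
```

Drop the `registry.push(...)` line and every `payload` becomes unreachable as soon as `handleRequest` returns, making it eligible for collection.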

How Node.js Memory Works

graph TD
    A[Node.js Process] --> B[Heap Memory]
    A --> C[Stack Memory]
    A --> D[Native Memory]

    B --> B1[Old Space<br/>Long-lived objects]
    B --> B2[New Space<br/>Young objects]
    B --> B3[Large Object Space]
    B --> B4[Code Space]

    C --> C1[Function Calls]
    C --> C2[Local Variables]

    D --> D1[Buffers]
    D --> D2[Native Addons]

    style B1 fill:#ffccbc
    style B2 fill:#c5e1a5
    style D1 fill:#bbdefb

Heap Memory: Where objects, strings, and closures live. The default old-space limit depends on your Node.js version and available system memory (historically ~1.5GB on 64-bit systems; recent versions default higher) and can be raised with --max-old-space-size.

Stack Memory: Function call stack and local variables. Very limited (~1MB).

Native Memory: Buffers, external resources, native addons. Not subject to V8 heap limits.
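If you need more headroom than the default, the old-space limit can be set explicitly. The 4096 value below is only an example; pick a limit that fits your container or host:

```shell
# Raise the old-space limit to ~4GB (the flag takes megabytes)
node --max-old-space-size=4096 app.js

# Confirm the effective limit from inside Node
node -e "console.log((require('v8').getHeapStatistics().heap_size_limit / 1048576).toFixed(0) + ' MB')"
```

Note that raising the limit only delays an OOM crash if a leak is present; it buys time, not a fix.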

Common Causes of Memory Leaks

| Cause | Example | Impact |
| --- | --- | --- |
| Global variables | Accidentally creating globals | Prevents GC forever |
| Event listeners | Not removing listeners | Grows with each registration |
| Timer functions | setInterval not cleared | Closures retained indefinitely |
| Cache without limits | Unbounded in-memory cache | Grows forever |
| Closure scope | Retaining large objects in closures | Prevents GC of captured vars |
| Streams not closed | File/network streams left open | Native memory leak |
| Large objects in arrays | Pushing without bound | Array grows indefinitely |
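The event-listener row is the easiest to reproduce: each `.on()` call retains its callback (and everything the callback closes over) until it is explicitly removed. A quick demonstration, using a throwaway emitter:

```typescript
import { EventEmitter } from 'events';

const bus = new EventEmitter();
bus.setMaxListeners(50); // silence Node's default warning at 10 for this demo

for (let i = 0; i < 20; i++) {
  bus.on('tick', () => {}); // every registration is retained by the emitter
}

console.log(bus.listenerCount('tick')); // 20 listeners, none ever removed
```

Node's built-in "MaxListenersExceededWarning" (fired above 10 listeners per event by default) exists precisely to flag this pattern early.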

Detecting Memory Leaks

1. Recognizing the Symptoms

Gradual Memory Growth

# Monitor memory usage over time
node app.js &
PID=$!

while true; do
  ps -o pid,rss,vsz,command -p $PID
  sleep 60
done

# Output showing leak:
# PID    RSS     VSZ    COMMAND
# 1234   180000  2500000 node app.js
# 1234   245000  2650000 node app.js  # After 1 hour
# 1234   389000  2900000 node app.js  # After 2 hours
# 1234   512000  3200000 node app.js  # After 3 hours - LEAK!

Application Metrics

// memory-monitor.ts
import v8 from 'v8';
import { performance } from 'perf_hooks';

interface MemoryMetrics {
  timestamp: number;
  heapUsed: number;
  heapTotal: number;
  external: number;
  arrayBuffers: number;
  rss: number;
  heapLimit: number;
}

export function getMemoryMetrics(): MemoryMetrics {
  const memUsage = process.memoryUsage();
  const heapStats = v8.getHeapStatistics();

  return {
    timestamp: Date.now(),
    heapUsed: memUsage.heapUsed,
    heapTotal: memUsage.heapTotal,
    external: memUsage.external,
    arrayBuffers: memUsage.arrayBuffers,
    rss: memUsage.rss,
    heapLimit: heapStats.heap_size_limit,
  };
}

// Monitor and alert
export function startMemoryMonitoring(intervalMs: number = 60000) {
  const baseline = getMemoryMetrics();

  setInterval(() => {
    const current = getMemoryMetrics();
    const heapGrowthPercent = ((current.heapUsed - baseline.heapUsed) / baseline.heapUsed) * 100;

    console.log(`Heap growth: ${heapGrowthPercent.toFixed(2)}% (${(current.heapUsed / 1024 / 1024).toFixed(2)}MB)`);

    // Alert if growth exceeds 50%
    if (heapGrowthPercent > 50) {
      console.error('⚠️  WARNING: Possible memory leak detected!');
      console.error(`Heap has grown ${heapGrowthPercent.toFixed(2)}% from baseline`);
    }

    // Alert if approaching heap limit
    const heapUsagePercent = (current.heapUsed / current.heapLimit) * 100;
    if (heapUsagePercent > 80) {
      console.error(`🚨 CRITICAL: Heap usage at ${heapUsagePercent.toFixed(2)}% of limit!`);
    }
  }, intervalMs);
}

// Usage
startMemoryMonitoring(30000); // Check every 30 seconds

2. Taking Heap Snapshots

Heap snapshots capture all objects in memory at a specific point in time. By comparing snapshots, you can identify which objects are accumulating.

Taking Snapshots Programmatically

// heap-snapshot.ts
import v8 from 'v8';
import fs from 'fs';
import path from 'path';

export function takeHeapSnapshot(label: string = 'snapshot'): string {
  const filename = `heapsnapshot-${label}-${Date.now()}.heapsnapshot`;
  const filepath = path.join('/tmp', filename);

  const snapshot = v8.writeHeapSnapshot(filepath);
  console.log(`Heap snapshot written to: ${snapshot}`);

  return snapshot;
}

// Usage: Take snapshots at strategic points
takeHeapSnapshot('startup');

// ... after 1 hour
takeHeapSnapshot('after-1hour');

// ... after heavy usage
takeHeapSnapshot('after-load');

Using Chrome DevTools to Analyze Snapshots

# Start Node.js with inspector
node --inspect app.js

# Or attach to running process
kill -SIGUSR1 <PID>

# Then:
# 1. Open chrome://inspect in Chrome
# 2. Click "inspect" on your Node process
# 3. Go to Memory tab
# 4. Take heap snapshots
# 5. Compare snapshots to find growing objects

Automated Snapshot Comparison

// snapshot-analyzer.ts
import fs from 'fs';

interface HeapSnapshot {
  snapshot: {
    meta: any;
    node_count: number;
    edge_count: number;
  };
  nodes: number[];
  edges: number[];
  strings: string[];
}

export function analyzeHeapGrowth(snapshot1Path: string, snapshot2Path: string): void {
  // Snapshots can be hundreds of MB; JSON.parse loads them fully into
  // memory, so run this analysis offline, not inside the leaking process
  const snap1: HeapSnapshot = JSON.parse(fs.readFileSync(snapshot1Path, 'utf-8'));
  const snap2: HeapSnapshot = JSON.parse(fs.readFileSync(snapshot2Path, 'utf-8'));
  console.log('\n=== Heap Growth Analysis ===');
  console.log(`Snapshot 1 nodes: ${snap1.snapshot.node_count}`);
  console.log(`Snapshot 2 nodes: ${snap2.snapshot.node_count}`);
  console.log(`Growth: ${snap2.snapshot.node_count - snap1.snapshot.node_count} objects`);

  // Analyze string growth (common leak source)
  const stringGrowth = snap2.strings.length - snap1.strings.length;
  console.log(`\nString growth: ${stringGrowth} strings`);

  if (stringGrowth > 10000) {
    console.error('⚠️  Significant string growth detected - possible leak!');
  }
}

3. Using Memory Profilers

Clinic.js

# Install clinic
npm install -g clinic

# Profile your application
clinic doctor -- node app.js

# Generate heap profiler report
clinic heapprofiler -- node app.js

# Open the HTML report
# Look for:
# - Sawtooth pattern (good - GC is reclaiming memory)
# - Flat, stable baseline (good - no leak)
# - Baseline that keeps climbing after each GC (bad - likely leak)

Node.js Built-in Profiler

# Generate CPU and heap profiles
node --prof --heap-prof app.js

# After stopping the app, process the profile
node --prof-process isolate-0xNNNNNNNNNNNN-v8.log > processed.txt

4. Using APM Tools

Example: Integration with Datadog APM

// datadog-apm.ts
import tracer from 'dd-trace';

// Initialize Datadog tracer
tracer.init({
  service: 'my-nodejs-app',
  env: 'production',
  profiling: true, // Enable continuous profiling
  runtimeMetrics: true, // Collect heap metrics
});

// Datadog will automatically collect:
// - Heap size
// - Heap used
// - GC pause times
// - Object allocations

Custom Memory Metrics

// custom-metrics.ts
import { StatsD } from 'hot-shots';

const statsd = new StatsD({
  host: 'statsd.example.com',
  port: 8125,
  prefix: 'nodejs.app.',
});

// Report memory metrics
setInterval(() => {
  const mem = process.memoryUsage();

  statsd.gauge('memory.heap_used', mem.heapUsed);
  statsd.gauge('memory.heap_total', mem.heapTotal);
  statsd.gauge('memory.rss', mem.rss);
  statsd.gauge('memory.external', mem.external);
}, 10000);

Common Memory Leak Patterns and Fixes

Pattern 1: Event Listener Accumulation

The Leak:

// ❌ BAD: Event listeners accumulate
import { EventEmitter } from 'events';

class UserSession extends EventEmitter {
  constructor(private userId: string) {
    super();
    this.setupListeners();
  }

  setupListeners() {
    // Global event bus
    globalEventBus.on('user:update', (data) => {
      if (data.userId === this.userId) {
        this.emit('updated', data);
      }
    });
  }
}

// Each new session adds a listener but never removes it!
function handleConnection(userId: string) {
  const session = new UserSession(userId);
  // When session ends, listener remains...
}

The Fix:

// ✅ GOOD: Properly remove event listeners
class UserSession extends EventEmitter {
  private updateHandler: (data: any) => void;

  constructor(private userId: string) {
    super();
    this.updateHandler = this.handleUpdate.bind(this);
    this.setupListeners();
  }

  setupListeners() {
    globalEventBus.on('user:update', this.updateHandler);
  }

  private handleUpdate(data: any) {
    if (data.userId === this.userId) {
      this.emit('updated', data);
    }
  }

  destroy() {
    // Clean up listener
    globalEventBus.removeListener('user:update', this.updateHandler);
    this.removeAllListeners();
  }
}

function handleConnection(userId: string) {
  const session = new UserSession(userId);

  // Clean up on disconnect
  connection.on('close', () => {
    session.destroy();
  });
}

Pattern 2: Closures Capturing Large Contexts

The Leak:

// ❌ BAD: Closure retains the whole object graph
function processUsers(users: User[]) {
  // users is potentially millions of entries
  return users.map((user) => {
    return {
      id: user.id,
      getName: () => {
        // This closure keeps the ENTIRE user object alive just to
        // read one field. Multiplied across millions of users, every
        // large property on every user stays pinned in memory for as
        // long as the returned objects live.
        return user.name;
      },
    };
  });
}

The Fix:

// ✅ GOOD: Minimize closure scope
function processUsers(users: User[]) {
  return users.map((user) => {
    // Capture only what's needed
    const userName = user.name;
    const userId = user.id;

    return {
      id: userId,
      getName: () => userName, // No large object captured
    };
  });
}

Pattern 3: Timers Not Cleared

The Leak:

// ❌ BAD: setTimeout/setInterval not cleared
class DataPoller {
  private intervalId?: NodeJS.Timeout;

  startPolling(url: string) {
    this.intervalId = setInterval(async () => {
      const data = await fetch(url);
      // Process data...
      // Captures 'this' and all instance properties
    }, 5000);
  }

  // If stopPolling never called, timer runs forever!
}

const poller = new DataPoller();
poller.startPolling('https://api.example.com/data');
// poller goes out of scope but timer keeps running,
// keeping poller in memory forever

The Fix:

// ✅ GOOD: Always clear timers
class DataPoller {
  private intervalId?: NodeJS.Timeout;

  startPolling(url: string) {
    this.stopPolling(); // Clear any existing timer

    this.intervalId = setInterval(async () => {
      const data = await fetch(url);
      // Process data...
    }, 5000);
  }

  stopPolling() {
    if (this.intervalId) {
      clearInterval(this.intervalId);
      this.intervalId = undefined;
    }
  }

  destroy() {
    this.stopPolling();
  }
}

// Usage with cleanup
const poller = new DataPoller();
poller.startPolling('https://api.example.com/data');

// Always clean up
process.on('SIGTERM', () => {
  poller.destroy();
});

Pattern 4: Unbounded Cache Growth

The Leak:

// ❌ BAD: Cache grows without bounds
const cache = new Map<string, any>();

async function getCachedData(key: string): Promise<any> {
  if (cache.has(key)) {
    return cache.get(key);
  }

  const data = await fetchFromDatabase(key);
  cache.set(key, data); // Never expires or evicts!
  return data;
}

The Fix:

// ✅ GOOD: LRU cache with size limit
import { LRUCache } from 'lru-cache'; // v7+ exposes a named export

const cache = new LRUCache<string, any>({
  max: 500, // Maximum 500 items
  ttl: 1000 * 60 * 5, // 5 minute TTL
  updateAgeOnGet: true,
  dispose: (value, key) => {
    // Cleanup when evicted
    console.log(`Evicted ${key} from cache`);
  },
});

async function getCachedData(key: string): Promise<any> {
  if (cache.has(key)) {
    return cache.get(key);
  }

  const data = await fetchFromDatabase(key);
  cache.set(key, data);
  return data;
}

Pattern 5: Stream Not Closed

The Leak:

// ❌ BAD: Streams not properly closed
import fs from 'fs';

async function processFile(filePath: string) {
  const stream = fs.createReadStream(filePath);

  stream.on('data', (chunk) => {
    // Process chunk
  });

  // If error occurs or process exits, stream may not close!
  // Native file descriptor leaks
}

The Fix:

// ✅ GOOD: Let pipeline manage stream lifecycles
import fs from 'fs';
import { pipeline } from 'stream/promises';

async function processFile(filePath: string) {
  // pipeline destroys every stream on success or error,
  // so the file descriptor cannot leak
  await pipeline(fs.createReadStream(filePath), async (source) => {
    for await (const chunk of source) {
      processChunk(chunk); // process each chunk as it arrives
    }
  });
}

// The same call handles multi-stage pipelines and closes
// every stream automatically, even when a middle stage throws
async function processFileWithPipeline(inputPath: string, outputPath: string) {
  await pipeline(fs.createReadStream(inputPath), transformStream(), fs.createWriteStream(outputPath));
}

Real-World Memory Leak Case Study

The Problem

A production Express.js API started crashing every 12-18 hours with OOM errors. Memory usage showed steady growth from 150MB to 1.4GB before crashing.

The Investigation

Step 1: Identify the trend

// Added monitoring
import { getMemoryMetrics, startMemoryMonitoring } from './memory-monitor';

startMemoryMonitoring(60000); // Log every minute

Output showed consistent linear growth: ~1MB/minute.

Step 2: Take heap snapshots

# Snapshot at startup
curl -X POST http://localhost:3000/admin/heap-snapshot?label=startup

# Snapshot after 1 hour
curl -X POST http://localhost:3000/admin/heap-snapshot?label=1hour

# Snapshot after 4 hours
curl -X POST http://localhost:3000/admin/heap-snapshot?label=4hours

Step 3: Analyze in Chrome DevTools

Comparing snapshots revealed:

  • 400,000+ IncomingMessage objects
  • 400,000+ Socket objects
  • All referenced by a single Array

Step 4: Find the source

// The culprit: Request logging middleware
const requestLog: any[] = [];

app.use((req, res, next) => {
  // ❌ BAD: Keeps ALL request objects forever!
  requestLog.push({
    timestamp: Date.now(),
    method: req.method,
    url: req.url,
    req: req, // <-- This retains the entire request object
    // including sockets, buffers, etc.
  });
  next();
});

The Fix

// ✅ GOOD: Log only what's needed + rotation
import { CircularBuffer } from './circular-buffer';

const requestLog = new CircularBuffer<RequestLog>(1000); // Max 1000 entries

interface RequestLog {
  timestamp: number;
  method: string;
  url: string;
  ip: string;
  userAgent: string;
  // No reference to req object!
}

app.use((req, res, next) => {
  requestLog.push({
    timestamp: Date.now(),
    method: req.method,
    url: req.url,
    ip: req.ip,
    userAgent: req.get('user-agent') || 'unknown',
  });
  next();
});

Circular Buffer Implementation:

// circular-buffer.ts
export class CircularBuffer<T> {
  private buffer: T[];
  private writeIndex: number = 0;
  private isFull: boolean = false;

  constructor(private capacity: number) {
    this.buffer = new Array(capacity);
  }

  push(item: T): void {
    this.buffer[this.writeIndex] = item;
    this.writeIndex = (this.writeIndex + 1) % this.capacity;

    if (this.writeIndex === 0) {
      this.isFull = true;
    }
  }

  getAll(): T[] {
    if (!this.isFull) {
      return this.buffer.slice(0, this.writeIndex);
    }
    return [...this.buffer.slice(this.writeIndex), ...this.buffer.slice(0, this.writeIndex)];
  }

  clear(): void {
    this.buffer = new Array(this.capacity);
    this.writeIndex = 0;
    this.isFull = false;
  }

  get size(): number {
    return this.isFull ? this.capacity : this.writeIndex;
  }
}

The Result

After deploying the fix:

  • Memory stabilized at ~180MB
  • No more OOM crashes
  • Application uptime increased from 12-18 hours to weeks

Prevention Strategies

1. Use WeakMap for Object Associations

// ✅ GOOD: WeakMap doesn't prevent GC
const objectMetadata = new WeakMap<object, Metadata>();

function attachMetadata(obj: object, metadata: Metadata) {
  objectMetadata.set(obj, metadata);
  // When obj is GC'd, the metadata is also freed
}

2. Implement Object Pooling

// object-pool.ts
export class ObjectPool<T> {
  private available: T[] = [];
  private inUse = new Set<T>();

  constructor(
    private factory: () => T,
    private reset: (obj: T) => void,
    private maxSize: number = 100,
  ) {
    // Pre-allocate some objects
    for (let i = 0; i < Math.min(10, maxSize); i++) {
      this.available.push(factory());
    }
  }

  acquire(): T {
    let obj = this.available.pop();

    if (!obj) {
      if (this.inUse.size >= this.maxSize) {
        throw new Error('Object pool exhausted');
      }
      obj = this.factory();
    }

    this.inUse.add(obj);
    return obj;
  }

  release(obj: T): void {
    if (!this.inUse.has(obj)) {
      throw new Error('Object not from this pool');
    }

    this.inUse.delete(obj);
    this.reset(obj);
    this.available.push(obj);
  }

  get stats() {
    return {
      available: this.available.length,
      inUse: this.inUse.size,
      total: this.available.length + this.inUse.size,
    };
  }
}

// Usage
const bufferPool = new ObjectPool(
  () => Buffer.allocUnsafe(1024),
  (buf) => buf.fill(0),
  50,
);

const buffer = bufferPool.acquire();
try {
  // Use buffer
} finally {
  bufferPool.release(buffer);
}

3. Set Up Automated Memory Monitoring

# kubernetes-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: nodejs-app:latest
          resources:
            requests:
              memory: '256Mi'
              cpu: '200m'
            limits:
              memory: '512Mi' # Hard limit prevents OOM killing other pods
              cpu: '500m'
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          # Memory usage alerting
          env:
            - name: MEMORY_ALERT_THRESHOLD_PERCENT
              value: '80'

4. Regular Heap Snapshot Audits

// scheduled-heap-audit.ts
import cron from 'node-cron';
import { takeHeapSnapshot } from './heap-snapshot';

// Take heap snapshot daily at 3am
cron.schedule('0 3 * * *', () => {
  console.log('Taking scheduled heap snapshot');
  const snapshot = takeHeapSnapshot('daily-audit');

  // Upload to S3 or artifact storage for analysis
  uploadSnapshotToS3(snapshot);
});

Tool Comparison for Memory Debugging

| Tool | Best For | Difficulty | Production Safe? | Cost |
| --- | --- | --- | --- | --- |
| Chrome DevTools | Deep analysis, heap snapshots | Medium | No (overhead) | Free |
| Clinic.js | Quick diagnostics | Easy | No (overhead) | Free |
| Node --inspect | Development debugging | Easy | No (opens debug port) | Free |
| Datadog APM | Continuous monitoring | Easy | Yes | $$$ |
| New Relic | APM with memory tracking | Easy | Yes | $$$ |
| Elastic APM | Open source APM | Medium | Yes | Free/$$ |
| Prometheus + Grafana | Custom metrics | Medium | Yes | Free |
| heapdump module | Programmatic snapshots | Easy | ⚠️ (careful) | Free |

Conclusion

Memory leaks in Node.js are preventable with:

  1. Awareness of common patterns (listeners, timers, closures, caches)
  2. Monitoring to detect growth early
  3. Tools to diagnose root causes (heap snapshots, profilers)
  4. Prevention through code review and automated checks
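One way to automate point 4 is a coarse leak check in your test suite. This is only a sketch: `assertNoLeak` is a hypothetical helper, and for stable readings you would run it with `node --expose-gc` so a forced GC is available before each measurement.

```typescript
// Coarse leak check: run a function many times and fail if the heap
// grows beyond a threshold. Run with --expose-gc for stable readings.
function heapAfterGc(): number {
  (globalThis as any).gc?.(); // no-op if --expose-gc wasn't passed
  return process.memoryUsage().heapUsed;
}

export function assertNoLeak(fn: () => void, iterations = 1000, maxGrowthBytes = 10 * 1024 * 1024): void {
  fn(); // warm-up so one-time allocations don't count as growth
  const before = heapAfterGc();
  for (let i = 0; i < iterations; i++) fn();
  const growth = heapAfterGc() - before;
  if (growth > maxGrowthBytes) {
    throw new Error(`Possible leak: heap grew ${(growth / 1048576).toFixed(1)}MB over ${iterations} runs`);
  }
}

// A leak-free function passes; one that appends to a module-level
// array would eventually trip the threshold as iterations increase.
assertNoLeak(() => { JSON.parse('{"a":1}'); }, 500);
```

This is deliberately crude — heap measurements are noisy without a forced GC — but it is enough to catch the unbounded-growth patterns described above before they reach production.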

The key is to catch leaks early—ideally in development or staging—rather than discovering them in production when your app crashes at 3am.

Remember the golden rules:

  • Always remove event listeners
  • Clear timers when done
  • Use bounded caches (LRU)
  • Close streams and connections
  • Minimize closure scope
  • Monitor memory in production

Ready to add comprehensive memory monitoring to your Node.js applications? Sign up for ScanlyApp and get automated performance and memory leak detection integrated into your development workflow today.

Related articles: see our complete performance testing guide covering memory and beyond, database connection pool leaks that mirror Node.js memory issues, and the observability tooling needed to detect memory leaks in production.
