PgBouncer is a lightweight connection pooler that sits between Node.js and PostgreSQL. Benefits: (1) handles 10,000+ concurrent clients with minimal memory (~1MB RAM per 1,000 connections), (2) responds to connection requests instantly (no PostgreSQL connection-establishment overhead), (3) efficiently manages limited PostgreSQL connections (max_connections defaults to 100). Problems without PgBouncer: the Node.js pg pool is exhausted under load (default 10 connections), PostgreSQL max_connections is reached (each connection uses ~10MB RAM), and connection establishment is slow (TCP handshake + auth). PgBouncer's solution: multiplex thousands of client connections onto a few PostgreSQL connections. Install: apt-get install pgbouncer (Debian/Ubuntu), brew install pgbouncer (macOS). Lightweight: ~10MB memory footprint regardless of client count. Acts as a proxy: Node connects to the PgBouncer port, and PgBouncer connects to PostgreSQL.
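A back-of-the-envelope sketch of the memory argument above, using only the rough figures given in the text (~10MB RAM per direct PostgreSQL connection, ~1MB per 1,000 pooled client connections); the function name and numbers are illustrative, not measurements:

```javascript
// Rough memory comparison: every client holding its own PostgreSQL
// connection vs. PgBouncer multiplexing clients onto a small pool.
// Figures are the approximations from the text, illustrative only.
function estimateMemoryMB(clients, poolSize) {
  const direct = clients * 10;                    // each client opens a ~10MB PG connection
  const pooled = poolSize * 10 + clients / 1000;  // few PG connections + cheap client sockets
  return { direct, pooled };
}

console.log(estimateMemoryMB(10000, 25)); // { direct: 100000, pooled: 260 }
```

At 10,000 clients the direct approach is infeasible (~100GB), while the pooled estimate stays in the hundreds of megabytes, which is the whole case for multiplexing.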
Node.js PgBouncer Pooling FAQ & Answers
12 expert Node.js PgBouncer pooling answers researched from official documentation. Every answer cites authoritative sources you can verify.
Session Pooling: Connection held for the entire client session. The client connects and gets a dedicated PostgreSQL connection until disconnect. Use when the app relies on session features (temp tables, session-level prepared statements, advisory locks). Most compatible but least efficient. Transaction Pooling (RECOMMENDED): Connection returned to the pool after a transaction commits or rolls back. Multiple clients share one connection across transactions. Use for most Node.js apps (REST APIs, web servers); a balance of performance and compatibility, and what roughly 90% of apps use. Statement Pooling: Connection returned after each statement. Highest performance, but breaks prepared statements and multi-statement transactions; rarely used. Configure: pool_mode = transaction in pgbouncer.ini. Rough ratios: session mode, 100 clients = 100 PostgreSQL connections; transaction mode, 100 clients ≈ 10 PostgreSQL connections (10× fewer); statement mode, 100 clients ≈ 1-2 connections.
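The client-to-server ratios above can be sketched as a tiny helper; the ratios (session 1:1, transaction ~10:1, statement ~50:1) are the rough figures from the text, not constants of PgBouncer, and real ratios depend entirely on the workload:

```javascript
// Illustrative only: approximate PostgreSQL connections needed per
// pooling mode, using the rough multiplexing ratios stated above.
function approxServerConnections(clients, mode) {
  const ratio = { session: 1, transaction: 10, statement: 50 }[mode];
  if (!ratio) throw new Error(`unknown pool_mode: ${mode}`);
  return Math.max(1, Math.ceil(clients / ratio));
}

console.log(approxServerConnections(100, 'session'));     // 100
console.log(approxServerConnections(100, 'transaction')); // 10
console.log(approxServerConnections(100, 'statement'));   // 2
```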
Configure pgbouncer.ini: [databases] mydb = host=localhost dbname=mydb port=5432; [pgbouncer] listen_addr = 127.0.0.1, listen_port = 6432, pool_mode = transaction, default_pool_size = 25, max_client_conn = 1000, auth_type = md5, auth_file = /etc/pgbouncer/userlist.txt. Key settings: default_pool_size (PostgreSQL connections per database, 20-30 recommended), max_client_conn (max clients, 1000-10000), pool_mode (transaction for Node.js). Node.js connection: const pool = new Pool({host: 'localhost', port: 6432, database: 'mydb'}). Connect to the PgBouncer port (6432), not the PostgreSQL port (5432). Auth: create userlist.txt with one entry per user, e.g. "myuser" "md5<hash>", where <hash> is md5(password + username) prefixed with the literal string md5.
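The inline settings above, laid out as a minimal pgbouncer.ini sketch (database name, paths, and sizes are the example values from the text; adjust for your environment):

```ini
; Minimal pgbouncer.ini assembling the settings described above
[databases]
mydb = host=localhost dbname=mydb port=5432

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction
default_pool_size = 25
max_client_conn = 1000
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```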
Set default_pool_size to 20-30 per database for transaction pooling. Formula: pool_size = (number of CPU cores × 2) + effective_spindle_count. For SSD: cores × 2 to cores × 4. Example: 4-core server with SSD = 8-16 connections. Start with 25, monitor utilization. Check: SHOW POOLS; shows cl_active (active clients), sv_active (active PostgreSQL connections), sv_idle (idle connections). Target: sv_active + sv_idle ≈ default_pool_size. If sv_active consistently maxed and clients waiting, increase pool_size. If mostly idle, decrease. PostgreSQL side: max_connections should be > (default_pool_size × number of databases + 10 for superuser). Example: PgBouncer pool_size=25, 4 databases = PostgreSQL max_connections > 110. Don't set too high: Each PostgreSQL connection uses ~10MB RAM.
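The sizing arithmetic above can be captured in two hypothetical helpers (names are ours, the formulas are from the text: cores × 2 to cores × 4 for SSD-backed servers, and max_connections > default_pool_size × databases + 10 for the superuser reserve):

```javascript
// Hypothetical helpers applying the sizing rules above.
// poolSizeRange: cores × 2 to cores × 4 for SSD-backed servers.
function poolSizeRange(cpuCores) {
  return { low: cpuCores * 2, high: cpuCores * 4 };
}

// minMaxConnections: lower bound for PostgreSQL max_connections when
// PgBouncer fronts several databases, plus a 10-connection superuser reserve.
function minMaxConnections(defaultPoolSize, databases) {
  return defaultPoolSize * databases + 10;
}

console.log(poolSizeRange(4));         // { low: 8, high: 16 }
console.log(minMaxConnections(25, 4)); // 110
```

This reproduces the worked example in the answer: a 4-core SSD server lands at 8-16 connections, and four databases at pool_size 25 require max_connections above 110.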
Connect to PgBouncer admin console: psql -p 6432 -U pgbouncer pgbouncer. Key commands: (1) SHOW POOLS; displays per-database stats: cl_active (clients executing queries), cl_waiting (clients queued), sv_active (busy PostgreSQL connections), sv_idle (idle connections), maxwait (max queue wait time). (2) SHOW STATS; shows total queries, query time, transaction count. (3) SHOW CLIENTS; lists all client connections. (4) SHOW SERVERS; lists PostgreSQL connections. Alerts: (1) cl_waiting > 0 sustained = pool exhausted, increase default_pool_size. (2) maxwait > 5 seconds = severe queueing. (3) sv_active ≈ default_pool_size = pool at capacity. Integrate with Prometheus: pgbouncer_exporter exports metrics. Grafana dashboard: Query rate, connection pool usage, wait times. Health check pattern: If maxwait > 10s, trigger alert.
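The alert rules above can be sketched as a small health-check function; the field names mirror SHOW POOLS columns, but the function itself and its messages are hypothetical (in a real setup you would feed it rows queried from the admin console, and maxwait here is taken to be seconds):

```javascript
// Hypothetical health check over one SHOW POOLS row, applying the
// alert thresholds from the text: sustained cl_waiting > 0, maxwait
// over 5 seconds, and sv_active at the configured pool size.
function poolAlerts(row, defaultPoolSize) {
  const alerts = [];
  if (row.cl_waiting > 0) alerts.push('clients queued; consider raising default_pool_size');
  if (row.maxwait > 5) alerts.push('severe queueing: maxwait over 5s');
  if (row.sv_active >= defaultPoolSize) alerts.push('pool at capacity');
  return alerts;
}

const busy = { cl_active: 20, cl_waiting: 3, sv_active: 25, sv_idle: 0, maxwait: 7 };
console.log(poolAlerts(busy, 25).length); // 3
```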
Use transaction pooling for 90% of Node.js applications. Transaction mode: Connection returned after COMMIT/ROLLBACK, available for next transaction. Compatible with standard SQL patterns in Node.js (pg, Sequelize, TypeORM). Works with: Basic queries, transactions using BEGIN/COMMIT, prepared statements (per-transaction). Doesn't work with: Temporary tables, advisory locks, LISTEN/NOTIFY, session-level prepared statements. Session mode: Only use if app requires session features (rare in Node.js). Less efficient: 1 PostgreSQL connection per client. Example compatible code: await client.query('BEGIN'); await client.query('INSERT...'); await client.query('COMMIT') - connection returned to pool after COMMIT. Incompatible: CREATE TEMP TABLE (exists only in session). For LISTEN/NOTIFY: Use separate dedicated connection outside PgBouncer. 95% of REST APIs work perfectly with transaction mode.
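The incompatibilities listed above lend themselves to a quick guard; this pattern check is illustrative and deliberately not exhaustive (it only flags the session features named in the text, and string matching cannot catch every case):

```javascript
// Illustrative check flagging SQL that relies on session state and
// therefore breaks under transaction pooling, per the list above.
const SESSION_FEATURES = [
  /\bCREATE\s+TEMP(ORARY)?\s+TABLE\b/i, // temp tables live only in one session
  /\bLISTEN\b/i,                        // LISTEN/NOTIFY needs a dedicated connection
  /\bNOTIFY\b/i,
  /\bpg_advisory_lock\b/i,              // session-level advisory locks
];

function unsafeForTransactionPooling(sql) {
  return SESSION_FEATURES.some((re) => re.test(sql));
}

console.log(unsafeForTransactionPooling('CREATE TEMP TABLE t (id int)')); // true
console.log(unsafeForTransactionPooling('SELECT * FROM users'));          // false
```

For the queries it does flag, route them to a dedicated connection that bypasses PgBouncer, as the answer suggests for LISTEN/NOTIFY.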