nginx 50 Q&As

NGINX FAQ & Answers

50 expert NGINX answers researched from official documentation. Every answer cites authoritative sources you can verify.

Q: What is NGINX and what are its key features?

A

Nginx (nginx 1.26+ stable as of 2025) is an open-source web server, reverse proxy, load balancer, and HTTP cache built on an event-driven architecture. Key features: handles 10,000+ concurrent connections efficiently with low resource usage, reverse proxy for HTTP/HTTPS/TCP/UDP protocols, load balancing with 6 algorithms (round-robin, least_conn, ip_hash, hash, random, random two choices), SSL/TLS 1.2/1.3 termination, HTTP/2 support and experimental HTTP/3 with QUIC (nginx 1.25+), proxy caching, WebSocket proxying, rate limiting with leaky bucket algorithm, gzip/brotli compression, URL rewriting. Use cases: web server (superior static file performance vs Apache), reverse proxy for Node.js/Python/Java apps, API gateway, load balancer, media streaming. Configuration: directive-based syntax in /etc/nginx/nginx.conf. Performance: the event-driven architecture enables high concurrency with minimal memory footprint.

99% confidence
Q: How is NGINX configuration structured and how do you manage it?

A

Nginx config uses directive-based syntax with hierarchical contexts (nginx 1.26+). Main contexts: main (global settings like user, worker_processes), events (connection processing: worker_connections), http (HTTP server settings: gzip, keepalive), server (virtual host definitions), location (URI-specific routing and handling). Syntax rules: Directives end with semicolon (;), blocks use curly braces {}, include files with include /etc/nginx/conf.d/*.conf;, comments start with #. Main config file: /etc/nginx/nginx.conf. Essential commands: nginx -t (test config syntax), nginx -s reload (graceful reload without downtime), nginx -s stop (fast shutdown), nginx -s quit (graceful shutdown). Example structure: http { upstream backend { server srv1:8080; } server { listen 80; server_name example.com; location / { root /var/www; try_files $uri $uri/ =404; } location /api/ { proxy_pass http://backend; } } }. Best practice (2025): Modular configs in /etc/nginx/conf.d/, test before reload.
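A minimal sketch of the structure described above, laid out as it would appear in /etc/nginx/nginx.conf; the `user nginx;` line and the `srv1` backend host are placeholders, not values from the answer:

```nginx
# /etc/nginx/nginx.conf -- main context
user              nginx;
worker_processes  auto;

events {                                # connection processing
    worker_connections 1024;
}

http {
    include /etc/nginx/conf.d/*.conf;   # modular per-site configs

    upstream backend {
        server srv1:8080;
    }

    server {                            # virtual host
        listen 80;
        server_name example.com;

        location / {                    # URI-specific handling
            root /var/www;
            try_files $uri $uri/ =404;
        }
        location /api/ {
            proxy_pass http://backend;
        }
    }
}
```

Validate with nginx -t, then apply with nginx -s reload.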

99% confidence
Q: What are server blocks (virtual hosts) and how do listen and server_name work?

A

Server blocks (virtual hosts) define separate virtual servers on a single nginx instance, enabling multi-site hosting. Syntax: server { listen 80; server_name example.com www.example.com; root /var/www/example; }. listen directive: Binds to IP:port (listen 80; for HTTP, listen 443 ssl http2; for HTTPS with HTTP/2). Multiple listen directives supported. default_server flag: Marks catch-all server for unmatched requests (listen 80 default_server;). server_name directive: Defines virtual host names with multiple formats: exact names (example.com), wildcard names starting with an asterisk (*.example.com), wildcard names ending with an asterisk (mail.*), regex (~^(?<subdomain>\w+)\.example\.com$). Server selection algorithm: (1) Exact name match, (2) Longest wildcard name starting with an asterisk (*.example.com), (3) Longest wildcard name ending with an asterisk (mail.*), (4) First matching regex (order matters), (5) Default server. Server names stored in hash tables for O(1) lookup. Best practice (2025): Use exact names for performance, one default_server per listen port, separate server blocks per domain for clarity. Essential for hosting multiple sites on a single server.
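A sketch of two virtual hosts plus a catch-all default server, using the answer's placeholder domains; the `return 444;` convention (close the connection without a response) is an assumption beyond the answer:

```nginx
server {                                       # catch-all for unmatched Host headers
    listen 80 default_server;
    server_name _;
    return 444;                                # drop requests that match no site
}

server {
    listen 80;
    server_name example.com www.example.com;   # exact names: fastest lookup
    root /var/www/example;
}

server {
    listen 80;
    server_name *.example.com;                 # wildcard name starting with an asterisk
    root /var/www/wildcard;
}
```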

99% confidence
Q: How do location blocks work and how does NGINX choose which one matches?

A

location blocks configure URI-specific request handling using modifiers and a priority matching algorithm. Syntax: location [modifier] pattern { directives; }. Modifiers: = (exact match, highest priority), ^~ (prefix match that stops regex evaluation), ~ (case-sensitive regex), ~* (case-insensitive regex), none (standard prefix match). Matching algorithm (nginx 1.26+): (1) Check all exact matches (=), use immediately if found. (2) Find longest prefix match using red-black tree O(log n). If it uses ^~, stop and use it. (3) Evaluate regex locations (~ and ~*) sequentially in config order, use first match. (4) If no regex matched, use stored longest prefix. Example: location = /login { } (exact), location ^~ /static/ { } (prefix, no regex), location ~* \.(jpg|png|gif)$ { } (case-insensitive regex), location / { } (catch-all). Variables available: $uri (normalized URI), $request_uri (original with query string), $args (query parameters). Nesting supported: location /api/ { location ~ \.json$ { } }. Best practice (2025): Use exact/prefix for performance (O(log n)), regex only when necessary (O(n) sequential), place specific matches before general ones.
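The answer's example locations arranged in one server block, with comments marking the matching order; the expires values and the `backend` upstream are illustrative assumptions:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    location = /login {                 # 1. exact match, checked first
        proxy_pass http://backend;
    }

    location ^~ /static/ {              # 2. prefix match; ^~ skips the regex pass
        expires 1y;
    }

    location ~* \.(jpg|png|gif)$ {      # 3. regexes tried in config order
        expires 30d;
    }

    location / {                        # 4. fallback: longest matching prefix
        try_files $uri $uri/ =404;
    }
}
```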

99% confidence
Q: How do you configure NGINX as a reverse proxy?

A

Reverse proxy forwards client requests to backend servers, acting as intermediary for security, performance, and scalability (nginx 1.26+ recommended). Basic configuration: upstream backend { server backend1.example.com:8080; server backend2.example.com:8080; } server { listen 80; location /api/ { proxy_pass http://backend; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; proxy_set_header Connection ""; } }. Essential headers: Host (preserves original host), X-Real-IP (client IP), X-Forwarded-For (proxy chain), X-Forwarded-Proto (original protocol http/https). Performance directives: proxy_http_version 1.1; (enable keepalive to backends), proxy_set_header Connection ""; (clear connection header for upstream keepalive), proxy_buffering on; (default, buffers responses). Benefits: SSL/TLS termination at proxy, load balancing across backends, caching for performance, security (hide backend infrastructure), single entry point for microservices. Use cases: Node.js/Python/Java apps, API gateways, microservices. Best practice (2025): Use upstream blocks for multiple backends, enable HTTP/1.1 and keepalive for performance, set appropriate timeouts (proxy_read_timeout, proxy_connect_timeout).
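The answer's proxy configuration laid out as a file; the keepalive 32 line and the timeout values are illustrative additions mentioned elsewhere in this FAQ, not fixed recommendations:

```nginx
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    keepalive 32;                            # reuse upstream connections
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://backend;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_http_version 1.1;              # needed for upstream keepalive
        proxy_set_header Connection "";      # clear the hop-by-hop header

        proxy_connect_timeout 5s;            # example timeouts; tune per application
        proxy_read_timeout    60s;
    }
}
```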

99% confidence
Q: How does NGINX load balancing work and which algorithms are available?

A

Nginx distributes traffic across multiple backend servers using upstream blocks and 6 load balancing algorithms (nginx 1.26+). Configuration: upstream backend { least_conn; server backend1.example.com:8080 weight=3 max_fails=3 fail_timeout=30s; server backend2.example.com:8080; server backend3.example.com:8080 backup; }. Algorithms: (1) round-robin (default, sequential distribution with weight support), (2) least_conn (routes to server with fewest active connections), (3) ip_hash (sticky sessions based on client IP hash), (4) hash $variable (custom key like URI or cookie), (5) random (random selection), (6) random two choices (nginx 1.15.1+, select 2 random servers, route to one with fewer connections). Server parameters: weight (relative weight, default 1), max_fails (mark down after N failures, default 1), fail_timeout (time to mark down, default 10s), backup (only used when primary servers fail), down (permanently mark unavailable). Passive health checks: Monitor actual requests, mark down after max_fails in fail_timeout window. Use with: location /api/ { proxy_pass http://backend; }. Benefits: High availability, horizontal scalability, no SPOF, automatic failover. Best practice (2025): Use least_conn for balanced load, set max_fails=3 fail_timeout=30s for stability, add backup servers for critical services.
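The same upstream definition as in the answer, formatted as a config sketch:

```nginx
upstream backend {
    least_conn;                                   # route to the server with fewest active connections

    server backend1.example.com:8080 weight=3 max_fails=3 fail_timeout=30s;
    server backend2.example.com:8080           max_fails=3 fail_timeout=30s;
    server backend3.example.com:8080 backup;      # used only when the primaries are down
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://backend;
    }
}
```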

99% confidence
Q: How do you configure SSL/TLS in NGINX?

A

SSL/TLS encrypts HTTP traffic (nginx 1.13.0+ required for TLS 1.3, OpenSSL 1.1.1+ required). 2025 best practice configuration: server { listen 443 ssl http2; server_name example.com; ssl_certificate /etc/ssl/certs/example.com.crt; ssl_certificate_key /etc/ssl/private/example.com.key; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers off; ssl_ciphers TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-RSA-AES256-GCM-SHA384; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always; }. HTTP to HTTPS redirect: server { listen 80; return 301 https://$host$request_uri; }. Key directives: ssl_protocols (TLS 1.2/1.3 only, disable older versions), ssl_prefer_server_ciphers off (let client choose TLS 1.3 ciphers), ssl_session_cache (improves performance, 1MB stores ~4000 sessions), OCSP stapling (validates certificate without client contacting CA), HSTS (force HTTPS, preload for browser list). Certificate sources: Let's Encrypt (free, auto-renewal with certbot). Requirements: nginx 1.13.0+ for TLS 1.3, OpenSSL 1.1.1+ for modern ciphers. Security: A+ rating on SSL Labs with this config.
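A sketch of the answer's TLS server block as a file. The cipher list is omitted for brevity (use the one in the answer), the certificate paths are the answer's placeholders, and `http2 on;` assumes nginx 1.25.1+ where the standalone directive replaces the `http2` listen parameter:

```nginx
server {
    listen 443 ssl;
    http2 on;                            # nginx 1.25.1+; older versions: listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;

    ssl_session_cache   shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # OCSP stapling
    ssl_stapling        on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt;
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
}

server {                                 # redirect all plain HTTP to HTTPS
    listen 80;
    return 301 https://$host$request_uri;
}
```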

99% confidence
Q: What does try_files do and why do single-page applications need it?

A

try_files checks file existence in specified order, serves first match, essential for SPAs with client-side routing (React, Vue, Angular). Syntax: try_files file1 file2 ... uri | =code;. SPA configuration (2025 best practice): server { listen 80; root /usr/share/nginx/html; index index.html; location / { try_files $uri $uri/ /index.html; } }. How it works: (1) Check if $uri exists as file (e.g., /static/logo.png), (2) Check if $uri/ exists as directory with index, (3) Fallback to /index.html (triggers internal redirect). Why needed: SPAs handle routing client-side (/dashboard, /profile routes don't exist as files), direct access or refresh causes 404 without try_files, this config returns index.html which loads JS router. Named location fallback: location / { try_files $uri @backend; } location @backend { proxy_pass http://api:3000; } (try file, fallback to backend). Static files with error code: try_files $uri $uri/ =404; (return 404 if not found instead of fallback). Performance: More efficient than if statements (avoid "if is evil" anti-pattern), evaluated at file system level. Use cases: React/Vue/Angular SPAs, static sites with fallback, clean URLs without .html extension, API fallback patterns. Best practice (2025): Always use for SPAs, add $uri/ for directory index support, combine with caching headers for static assets.
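The SPA fallback and the named-location fallback from the answer combined into one sketch; the /data/ path is a hypothetical prefix used so both patterns can coexist:

```nginx
server {
    listen 80;
    root  /usr/share/nginx/html;
    index index.html;

    # SPA: real files are served directly, unknown routes fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Alternative: try the file, otherwise hand the request to a backend
    location /data/ {
        try_files $uri @backend;
    }
    location @backend {
        proxy_pass http://api:3000;
    }
}
```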

99% confidence
Q: How does NGINX proxy caching work?

A

Nginx proxy caching stores backend responses on disk for fast redelivery without hitting origin servers (nginx 1.0+). Two-step setup: (1) proxy_cache_path in http context defines cache location and parameters. (2) proxy_cache in location context enables caching. Complete example: http { proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off; } server { location / { proxy_cache my_cache; proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; proxy_cache_key $scheme$proxy_host$uri$is_args$args; proxy_pass http://backend; add_header X-Cache-Status $upstream_cache_status; } }. Key directives: levels (2-level directory hierarchy prevents performance degradation), keys_zone (shared memory zone storing cache metadata, 1MB stores ~8,000 keys), inactive (remove unaccessed entries after duration), use_temp_path=off (write directly to cache, not temporary files), proxy_cache_valid (set validity period per status code). Mechanism: Cache manager process evicts least-recently-used items when max_size exceeded. Respects origin Cache-Control headers. Performance: proxy_cache_lock (one request fills cache, others wait), proxy_cache_background_update (refresh stale content asynchronously), proxy_cache_use_stale (serve expired content during backend failures). Monitoring headers: $upstream_cache_status returns HIT, MISS, BYPASS, EXPIRED. Best practice (2025): Always specify proxy_cache_valid per status code, enable cache locking to prevent thundering herd, implement background updates for zero-downtime refreshes.
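The answer's two-step cache setup as a file, with the lock/stale/background-update directives it recommends; the `backend` upstream is assumed to be defined elsewhere:

```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        listen 80;

        location / {
            proxy_cache       my_cache;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404     1m;
            proxy_cache_key   $scheme$proxy_host$uri$is_args$args;

            proxy_cache_lock              on;   # one request populates an entry, others wait
            proxy_cache_use_stale         error timeout updating http_500 http_502 http_503;
            proxy_cache_background_update on;   # refresh stale entries asynchronously

            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://backend;
        }
    }
}
```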

99% confidence
Q: How do the rewrite and return directives work, and when should you use each?

A

rewrite modifies request URI using regex patterns, but return directive is preferred for simple redirects (2025 best practice). Syntax: rewrite regex replacement [flag];. Flags: last (stop rewrite processing, search new location block), break (stop processing, serve from current location), redirect (return 302 temporary), permanent (return 301 permanent). Example: rewrite ^/old-page$ /new-page permanent; rewrite ^/product/(\d+)$ /products?id=$1 last;. Regex captures: Use $1, $2... for captured groups. return directive (cleaner alternative): return 301 https://$host$request_uri; (HTTP to HTTPS), return 301 /new-url; (simple redirect). When to use each: Use return for simple redirects (faster, more efficient), use rewrite for complex regex transformations, use try_files for file existence checks. Evaluation order: Rewrites execute sequentially within location block, processed before other directives. Best practice (2025): Prefer return over rewrite when possible (cleaner, faster), avoid complex rewrite chains (performance impact), use try_files for SPAs instead of rewrites, test with nginx -t before deployment. Common use cases: SEO-friendly URLs (rewrite ^/blog/(\w+)$ /blog.php?slug=$1 last;), forcing www subdomain (return 301 https://www.$host$request_uri;), canonical URLs (permanent redirects). Performance: return is faster than rewrite for simple redirects, minimize regex complexity.
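A short sketch combining the answer's return and rewrite examples in one server block:

```nginx
server {
    listen 80;
    server_name example.com;

    # Simple redirects: prefer return
    location = /old-page {
        return 301 /new-page;
    }

    # Regex transformations: rewrite with capture groups
    rewrite ^/product/(\d+)$ /products?id=$1   last;
    rewrite ^/blog/(\w+)$    /blog.php?slug=$1 last;
}
```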

99% confidence
Q: How do you optimize NGINX for serving static files?

A

Nginx excels at static file serving using the zero-copy sendfile() system call (6Gbps → 30Gbps throughput per Netflix). 2025 optimized configuration: location / { root /var/www/html; sendfile on; sendfile_max_chunk 512k; tcp_nopush on; tcp_nodelay on; } location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ { expires 1y; add_header Cache-Control "public, immutable"; } location ~* \.(css|js)$ { expires 1y; add_header Cache-Control "public, immutable"; gzip_static on; }. Key directives: sendfile on; (use OS sendfile() for zero-copy transfer, dramatically faster than read/write), sendfile_max_chunk 512k; (prevent one connection monopolizing a worker, improves fairness), tcp_nopush on; (send full packets, works with sendfile to batch headers+data in one packet), tcp_nodelay on; (disable 200ms Nagle delay for last packet), gzip_static on; (serve pre-compressed .gz files if they exist). Caching headers: expires 1y; for versioned assets (main.abc123.js), expires 1h; for dynamic content, add_header Cache-Control "public, immutable"; (browser never revalidates). File pattern matching: Use location ~* regex for case-insensitive extensions. Best practice (2025): Enable sendfile + tcp_nopush + tcp_nodelay together, pre-compress static assets (.css.gz, .js.gz) with gzip_static, use long cache times with versioned filenames, serve WebP/AVIF images for modern browsers. Performance: Nginx 10x faster than Node.js/Python for static files.
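The same static-file tuning as a server block (the directives are valid at server level as well as location level):

```nginx
server {
    listen 80;
    root /var/www/html;

    sendfile           on;
    sendfile_max_chunk 512k;
    tcp_nopush         on;
    tcp_nodelay        on;

    location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location ~* \.(css|js)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;                  # serve pre-compressed .gz files if present
    }
}
```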

99% confidence
Q: How do you configure request rate limiting in NGINX?

A

Nginx request rate limiting uses the leaky bucket algorithm to protect APIs and endpoints from abuse and DDoS attacks (nginx 1.0+). Two-step configuration: (1) Define zone in http context: limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s; where rate=10r/s means 1 request per 100ms. (2) Apply limit in location context: limit_req zone=api burst=5 nodelay;. Critical parameters: rate (requests per second r/s or per minute r/m), burst (queues up to N requests exceeding the rate limit), nodelay (forwards burst requests immediately instead of spacing them out, essential for APIs and login endpoints). Without nodelay, excess requests are queued and delayed sequentially, degrading user experience. Returns 503 Service Unavailable when hard limit exceeded. Configuration examples: Sensitive endpoints: limit_req zone=api burst=3 nodelay; (tight limits for /login), General APIs: limit_req zone=api burst=10-20 nodelay;. Memory efficiency: Use $binary_remote_addr (4/16 bytes) instead of $remote_addr (7-15 bytes). Zone sizing: 10m stores ~160,000 IP addresses. Best practice (2025): Always combine burst+nodelay together, adjust burst based on endpoint sensitivity, enable detailed logging to monitor violations.
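The zone plus per-location limits from the answer as a sketch; the limit_req_status 429 line is an optional extra (the default rejection status is 503) and the `backend` upstream is assumed:

```nginx
http {
    # 10 requests/second per client IP, tracked in a 10 MB shared zone
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        listen 80;

        location /login {                       # sensitive endpoint: tight limit
            limit_req zone=api burst=3 nodelay;
            proxy_pass http://backend;
        }

        location /api/ {                        # general API: more headroom
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;               # optional: reject with 429 instead of 503
            proxy_pass http://backend;
        }
    }
}
```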

99% confidence
Q: What are NGINX variables and how are they used?

A

Nginx variables store values for dynamic configuration, evaluated lazily at runtime. Built-in variables: $uri (normalized request URI without args), $request_uri (original URI with query string), $args (query parameters), $host (Host header or server name), $remote_addr (client IP, 7-15 bytes), $binary_remote_addr (binary IP, 4/16 bytes for IPv4/IPv6, memory efficient), $scheme (http or https), $request_method (GET/POST/etc), $http_<name> (arbitrary request header, lowercased with underscores, e.g., $http_user_agent), $upstream_cache_status (HIT/MISS/BYPASS), $server_name, $document_root. Custom variables: set $variable value; (simple assignment), map $http_upgrade $connection_upgrade { default upgrade; '' close; } (conditional logic, nginx 0.9.0+). Use cases: proxy_set_header X-Real-IP $remote_addr; (forward client IP), return 301 https://$host$request_uri; (redirect preserving URL), log_format main '$remote_addr - $request "$status"'; (access logs), if ($request_method = POST) { return 405; } (conditional processing). Best practice (2025): Use the map directive instead of if for complex conditions (the map table is built when the config loads and lookups are cheap), use $binary_remote_addr in rate limiting (saves memory), avoid if when possible ("if is evil" - limited context support). Variable interpolation: Variables expand inside double-quoted strings (set $full_url "https://$host$uri";). Scope: Variable names are visible configuration-wide once declared, but each variable's value is evaluated per request.
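A small sketch tying the answer's examples together; the X-Full-URL header is hypothetical, used only to show interpolation:

```nginx
http {
    # map: lookup table built at config load, value resolved per request
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;

        location /api/ {
            set $full_url "https://$host$uri";           # variables interpolate in strings
            proxy_set_header X-Real-IP  $remote_addr;    # forward the client IP
            proxy_set_header X-Full-URL $full_url;       # hypothetical header, for illustration
            proxy_set_header Connection $connection_upgrade;
            proxy_pass http://backend;
        }
    }
}
```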

99% confidence
Q: How do you proxy WebSocket connections with NGINX?

A

Nginx proxies WebSocket connections using the Upgrade and Connection hop-by-hop headers that must be explicitly forwarded to backends. Minimal configuration: location /ws/ { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; }. Critical: proxy_http_version 1.1 is mandatory—HTTP 1.0 doesn't support WebSocket upgrade negotiation. For both HTTP and WebSocket traffic on same location, use map directive: map $http_upgrade $connection_upgrade { default upgrade; '' close; } then proxy_set_header Connection $connection_upgrade; This ensures connections close properly when no upgrade is requested. Default proxy_read_timeout is 60 seconds, which terminates idle connections—increase to 3600s (1 hour) or implement backend ping/pong frames for keep-alive. Compatible with Socket.io, ws library, native WebSocket API.
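The map-based variant from the answer as a complete sketch, with the longer read timeout it recommends; the `backend` upstream is assumed:

```nginx
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;                   # plain HTTP requests get a normal Connection header
    }

    server {
        listen 80;

        location /ws/ {
            proxy_pass http://backend;
            proxy_http_version 1.1;                      # mandatory for the upgrade handshake
            proxy_set_header Upgrade    $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 3600s;                    # keep idle sockets open for 1 hour
        }
    }
}
```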

99% confidence
Q: What is the difference between the root and alias directives?

A

root and alias serve files with different path resolution, critical distinction to avoid 404 errors. root directive: Appends full URI to root path. Example: root /var/www/html; location /images/ { } → Request /images/logo.png serves /var/www/html/images/logo.png. Path = root + full_URI. alias directive: Replaces location part of URI with alias path. Example: location /images/ { alias /data/pictures/; } → Request /images/logo.png serves /data/pictures/logo.png. Path = alias + (URI - location). Key differences: root appends entire URI (including location), alias substitutes location prefix. Common error: alias must end with / if location ends with / (location /images/ { alias /data/pictures/; } ✓, alias /data/pictures ✗). Performance: root is faster (simple string concatenation vs substring replacement). Use cases for root: Standard directory serving where URI matches filesystem structure, most common use case (location / { root /var/www/html; }). Use cases for alias: Remapping URIs to different filesystem paths (location /static/ { alias /srv/assets/; }), serving files from outside webroot, legacy path compatibility. Best practice (2025): Use root by default (simpler, faster), use alias only when URI path differs from filesystem path, always match trailing / between location and alias. Example combining both: location / { root /var/www; } location /downloads/ { alias /mnt/storage/files/; }.
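A side-by-side sketch of the two path resolutions; the /photos/ prefix is used for the alias example (instead of the answer's /images/) so both can live in one server block:

```nginx
server {
    listen 80;

    # root: path = root + full URI
    location /images/ {
        root /var/www/html;              # /images/logo.png -> /var/www/html/images/logo.png
    }

    # alias: path = alias + (URI minus the location prefix)
    location /photos/ {
        alias /data/pictures/;           # /photos/logo.png -> /data/pictures/logo.png
    }
}
```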

99% confidence
Q: How do you enable gzip and brotli compression in NGINX?

A

Nginx supports gzip (universal) and brotli (typically 15-25% better compression, modern browsers); best practice is enabling both (2025). gzip configuration: http { gzip on; gzip_vary on; gzip_comp_level 5; gzip_min_length 1000; gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml; }. Brotli configuration (requires ngx_brotli module): brotli on; brotli_comp_level 6; brotli_types text/plain text/css application/json application/javascript application/xml+rss; brotli_static on;. Key directives: gzip_comp_level 5-6 (balance compression/CPU, 1=fast/low, 9=slow/high), gzip_min_length 1000; (skip files <1KB, overhead not worth it), gzip_vary on; (add Vary: Accept-Encoding header for caching proxies), gzip_types (MIME types to compress, text/html always included), gzip_static on; (serve pre-compressed .gz files). Brotli advantages: 15-25% better compression than gzip, superior for UTF-8 text (HTML/CSS/JS), improves Core Web Vitals scores. Use both: Serve brotli to modern browsers (automatic fallback to gzip for legacy clients). Don't compress: Images (jpg/png), videos (mp4), already compressed formats (overhead without benefit). Best practice (2025): Enable both gzip and brotli, use level 4-6 for dynamic content (real-time compression), pre-compress static assets at build time (brotli_static, gzip_static), prioritize brotli for mobile performance.
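The answer's gzip and brotli settings as one http-level block (the brotli directives assume the ngx_brotli module is compiled in or loaded):

```nginx
http {
    # gzip: built in, works everywhere
    gzip              on;
    gzip_vary         on;
    gzip_comp_level   5;
    gzip_min_length   1000;
    gzip_types        text/plain text/css text/xml application/json
                      application/javascript application/xml+rss
                      application/atom+xml image/svg+xml;

    # brotli: requires the ngx_brotli module
    brotli            on;
    brotli_comp_level 6;
    brotli_static     on;
    brotli_types      text/plain text/css application/json
                      application/javascript application/xml+rss;
}
```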

99% confidence
Q: How does the error_page directive work for custom error pages?

A

error_page directive creates internal redirects for HTTP error codes without browser redirect. Syntax: error_page code [codes...] [=response] uri;. Basic example: error_page 404 /404.html; error_page 500 502 503 504 /50x.html;. Protect error pages from direct access: location = /404.html { internal; root /var/www/html; }. The internal directive prevents external requests to error URIs. Modify response code returned to client: error_page 404 =200 /empty.gif; returns HTTP 200 instead of 404. For complex handling, use named location fallback: error_page 404 @fallback; location @fallback { proxy_pass http://backend; }. Critical: error_page performs internal redirect (URI changes, method becomes GET), not HTTP redirect (no 3xx status). Best practice (2025): Always use internal directive, store error pages locally, combine with error logging for monitoring.
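A sketch of the static error-page pattern from the answer, with both pages protected by internal:

```nginx
server {
    listen 80;
    root /var/www/html;

    error_page 404             /404.html;
    error_page 500 502 503 504 /50x.html;

    location = /404.html {
        internal;              # reachable only through the internal redirect
    }
    location = /50x.html {
        internal;
    }
}
```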

99% confidence
Q: How do NGINX health checks work (passive vs. active)?

A

Nginx offers passive health checks (open source, included) and active health checks (Plus only). PASSIVE CHECKS: Monitor actual client requests to upstream servers. Mark server unavailable after max_fails consecutive failures occur within fail_timeout window. Configuration: upstream backend { server srv1.com max_fails=3 fail_timeout=30s; server srv2.com backup; }. Mechanism: If srv1 fails 3 times in 30 seconds, marked down for 30 seconds, then retried. Parameters: max_fails (default 1), fail_timeout (default 10s, sets both detection window and recovery duration). Additional parameter: slow_start=30s gradually ramps traffic to recovering servers. ACTIVE CHECKS (Plus only): Send periodic health probes independent of client requests. Configuration: upstream backend { zone upstream_zone 64k; server srv1.com; } location /api/ { health_check uri=/health interval=5s fails=3 passes=2; }. Mechanism: Sends requests every 5 seconds to /health, marks down after 3 failures, recovers after 2 successes. Custom match rules define pass criteria (status codes, headers, body content). Benefits: Detect failures before affecting clients, customizable health endpoints. Best practice (2025): Use passive checks with max_fails=3 fail_timeout=30s for open-source production, add backup servers for critical services, upgrade to Plus for active checks if needing proactive monitoring.
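A passive-check sketch for open-source nginx using the answer's parameters; the proxy_next_upstream line is an addition showing how failed requests are retried on the next server:

```nginx
upstream backend {
    server srv1.com max_fails=3 fail_timeout=30s;   # down after 3 failures within 30s, retried after 30s
    server srv2.com backup;                          # used only when primary servers are unavailable
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Retry a failed or timed-out request on the next server in the pool
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }
}
```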

99% confidence
Q: How do you restrict access in NGINX (IP allow/deny, basic auth, geo-blocking)?

A

Nginx access control uses allow/deny directives (from ngx_http_access_module) for IP-based restrictions and auth_basic/auth_basic_user_file (from ngx_http_auth_basic_module) for HTTP authentication. IP-based syntax: allow address|CIDR|all; deny address|CIDR|all;. Rules are evaluated sequentially and the first match stops processing, so place deny all; last to block remaining IPs. Example: location /admin/ { allow 192.168.1.0/24; allow 10.0.0.1; deny all; } allows subnet 192.168.1.0/24 and IP 10.0.0.1, denies everything else. HTTP basic auth example: auth_basic "Restricted Area"; auth_basic_user_file /etc/nginx/.htpasswd; (generate with htpasswd -c /etc/nginx/.htpasswd username). Combine methods with the satisfy directive: satisfy all; requires BOTH IP whitelist AND valid credentials. satisfy any; requires EITHER approved IP OR valid credentials. Production example: location /api/ { satisfy all; allow 192.168.1.0/24; deny all; auth_basic "API Access"; auth_basic_user_file conf/htpasswd; }. Multi-layer security: IP for infrastructure, basic auth for humans, external IP lists possible via geo module. Geo-blocking (requires the ngx_http_geoip2_module and a MaxMind GeoIP2 database): geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb { $geoip2_data_country_iso_code country iso_code; } map $geoip2_data_country_iso_code $country { ~^(CN|RU)$ block; default pass; } if ($country = block) { return 403; }. Best practice (2025): Use satisfy all; for sensitive endpoints, store .htpasswd with 600 permissions, validate IPs in staging, combine with rate limiting for DDoS protection.
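The answer's allow/deny and satisfy examples as one sketch; the `backend` upstream is assumed:

```nginx
# Generate credentials once:  htpasswd -c /etc/nginx/.htpasswd username
server {
    listen 80;

    location /admin/ {                       # IP allowlist only
        allow 192.168.1.0/24;
        allow 10.0.0.1;
        deny  all;
    }

    location /api/ {                         # require an allowed IP AND valid credentials
        satisfy all;
        allow 192.168.1.0/24;
        deny  all;

        auth_basic           "API Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://backend;
    }
}
```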

99% confidence
Q: What are the key NGINX performance tuning settings?

A

Nginx performance tuning (2025): Set worker_processes auto; to match CPU cores for optimal throughput. Configure worker_connections 1024-4096; (default 512, test incrementally under load—each worker handles this many concurrent connections). Enable zero-copy file transfer: sendfile on; tcp_nopush on; tcp_nodelay on; (dramatically improves static file performance, 6Gbps → 30Gbps throughput). Keepalive for clients: keepalive_timeout 15s; keepalive_requests 100; (reuse connections, reduce TCP handshake overhead). Keepalive for upstream backends: keepalive 32; inside upstream block (persistent connections reduce latency). Timeouts: client_body_timeout 10s; client_header_timeout 10s; send_timeout 10s; (prevent slow clients blocking worker processes). Compression: gzip on; gzip_comp_level 4; gzip_types text/css application/javascript application/json; (level 4 optimal CPU/compression balance at ~95% compression vs. level 9). Buffer client uploads: client_body_buffer_size 128k; (process in memory before writing to disk). Cache file metadata: open_file_cache max=10000 inactive=20s; (caches file descriptors and stat info). Logging: access_log /var/log/nginx/access.log main buffer=32k flush=1m; (batch disk writes, reduce I/O overhead). Proxy buffering: proxy_buffering on; (default, buffers backend responses). Critical principle: Change one setting at a time, test under realistic load, revert if metrics don't improve. Most systems benefit from worker_processes auto + sendfile + tcp_nopush + tcp_nodelay alone.
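The tuning directives above gathered into one top-level sketch; the `app1` backend host and the worker_connections value are placeholders to be validated under load:

```nginx
user              nginx;
worker_processes  auto;                     # one worker per CPU core

events {
    worker_connections 4096;                # raise from the 512 default incrementally
}

http {
    sendfile    on;                         # zero-copy static file transfer
    tcp_nopush  on;
    tcp_nodelay on;

    keepalive_timeout  15s;
    keepalive_requests 100;

    client_body_timeout     10s;
    client_header_timeout   10s;
    send_timeout            10s;
    client_body_buffer_size 128k;

    gzip on;
    gzip_comp_level 4;
    gzip_types text/css application/javascript application/json;

    open_file_cache max=10000 inactive=20s;
    access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

    upstream backend {
        server app1:8080;
        keepalive 32;                       # pair with proxy_http_version 1.1 + empty Connection header
    }
}
```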

99% confidence
Q: How do you enable HTTP/2 and HTTP/3 (QUIC) in NGINX?

A

HTTP/2 enables multiplexing, header compression, and improved performance. Configuration: listen 443 ssl http2; (simple addition to existing SSL configuration; on nginx 1.25.1+ the standalone http2 on; directive replaces the listen parameter). Requirements: Nginx 1.9.5+, SSL/TLS mandatory, all modern browsers supported. Benefits: Multiple requests over single TCP connection without head-of-line blocking, HPACK header compression reduces overhead, automatic prioritization. HTTP/3 with QUIC (Nginx 1.25.0+): listen 443 quic reuseport; listen 443 ssl; add_header Alt-Svc 'h3=":443"; ma=86400'; (QUIC uses UDP and requires nginx built against a TLS library with QUIC support, such as quictls, BoringSSL, LibreSSL, or OpenSSL 3.5+). Performance tuning: http3_max_concurrent_streams 1024; http3_stream_buffer_size 1m; quic_gso on; quic_retry on;. Network: Allow UDP traffic on port 443 for HTTP/3. Best practice (2025): Enable HTTP/2 for all HTTPS sites (automatic fallback from HTTP/3), use HTTP/3 for cutting-edge performance, test with the Alt-Svc header to advertise HTTP/3 support. Protocol negotiation automatic via ALPN.
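A sketch of serving HTTP/2 and HTTP/3 from the same server block, assuming an nginx build with QUIC support, nginx 1.25.1+ for the http2 directive, and the certificate paths used earlier in this FAQ:

```nginx
server {
    # HTTP/3 (QUIC over UDP) and HTTP/2 (over TCP) on the same port
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;

    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;               # QUIC itself always uses TLS 1.3

    # Advertise HTTP/3 so clients can switch on subsequent requests
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Remember to open UDP port 443 in the firewall alongside TCP 443.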

99% confidence
Q: How do you configure NGINX access and error logging?

A

Nginx logs record HTTP requests (access_log) and server errors (error_log). Defaults: access_log /var/log/nginx/access.log, error_log /var/log/nginx/error.log with level warn. Error log levels (increasing severity): debug, info, notice, warn, error, crit, alert, emerg. Configuration: error_log /var/log/nginx/error.log warn;. Access log format with log_format directive (http block): log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';. Critical 2025 best practice: JSON logging for structured observability (Elasticsearch/Splunk integration). log_format json escape=json '{"time":"$time_iso8601","remote_addr":"$remote_addr","request":"$request","status":$status,"bytes":$body_bytes_sent,"duration":$request_time,"upstream":"$upstream_response_time","user_agent":"$http_user_agent"}'; then access_log /var/log/nginx/access.log json;. Performance optimization: access_log /path buffer=32k flush=1m; reduces disk I/O, open_log_file_cache max=10000 inactive=20s; caches file descriptors. Conditional logging: map $status $loggable { ~^[23] 0; default 1; } access_log /path if=$loggable; logs only 4xx/5xx errors. Centralized logging: access_log syslog:server=127.0.0.1:514 json; or syslog:server=remote.example.com,facility=local7,tag=nginx json;. Best practice (2025): Use JSON format for logging infrastructure, enable buffering+open_log_file_cache for high-traffic, rotate logs with logrotate, include $request_time and $upstream_response_time for performance debugging, send to ELK/Splunk/CloudWatch.
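The JSON format, conditional logging, and buffering from the answer combined into one http-level sketch:

```nginx
http {
    # Structured JSON access log for ELK/Splunk-style pipelines
    log_format json escape=json
        '{"time":"$time_iso8601","remote_addr":"$remote_addr",'
        '"request":"$request","status":$status,'
        '"bytes":$body_bytes_sent,"duration":$request_time,'
        '"upstream":"$upstream_response_time","user_agent":"$http_user_agent"}';

    # Skip 2xx/3xx responses; log everything else with buffered writes
    map $status $loggable {
        ~^[23]  0;
        default 1;
    }

    access_log /var/log/nginx/access.log json buffer=32k flush=1m if=$loggable;
    error_log  /var/log/nginx/error.log warn;

    open_log_file_cache max=10000 inactive=20s;
}
```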

99% confidence
Q: How do you configure CORS headers in NGINX?

A

CORS in Nginx requires setting 4 core headers: Access-Control-Allow-Origin (which origin), Access-Control-Allow-Methods (HTTP verbs), Access-Control-Allow-Headers (custom headers), Access-Control-Expose-Headers (response headers JS can access). Production configuration with origin validation: map $http_origin $allow_origin { ~^https://(www\.)?example\.com$ $http_origin; default ''; } location /api/ { add_header Access-Control-Allow-Origin $allow_origin always; add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS' always; add_header Access-Control-Allow-Headers 'Authorization, Content-Type, Accept' always; add_header Access-Control-Expose-Headers 'Content-Length, X-JSON-Response-Count' always; add_header Access-Control-Max-Age 86400 always; if ($request_method = OPTIONS) { return 204; } proxy_pass http://backend; }. Critical: OPTIONS preflight requests must return 204 No Content with CORS headers (use the always flag so headers are added for all status codes). For credentialed requests: add_header Access-Control-Allow-Credentials true always; but then you must use a specific origin, never the wildcard '*'. Security: Always validate origins against a whitelist, never echo an arbitrary $http_origin without validation. Access-Control-Max-Age 86400 (1 day) caches preflight results, reducing redundant OPTIONS requests.
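The same configuration laid out as a file (the map goes in the http context; the `backend` upstream and the X-JSON-Response-Count header are the answer's placeholders):

```nginx
# http context
map $http_origin $allow_origin {
    ~^https://(www\.)?example\.com$  $http_origin;   # whitelist check
    default                          '';
}

server {
    listen 80;

    location /api/ {
        add_header Access-Control-Allow-Origin   $allow_origin always;
        add_header Access-Control-Allow-Methods  'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header Access-Control-Allow-Headers  'Authorization, Content-Type, Accept' always;
        add_header Access-Control-Expose-Headers 'Content-Length, X-JSON-Response-Count' always;
        add_header Access-Control-Max-Age        86400 always;

        if ($request_method = OPTIONS) {
            return 204;                              # preflight; the headers above still apply
        }

        proxy_pass http://backend;
    }
}
```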

99% confidence
Q: What are the differences between NGINX Open Source and NGINX Plus?

A

NGINX Open Source (free): Community-driven, passive health checks only (monitor actual requests, detect failures post-facto), no SLA support. Both versions: Identical core engine, reverse proxy, load balancing, SSL/TLS, compression, caching. NGINX Plus ($849-$2,099/year, 2025 pricing): Commercial subscription from F5 with enterprise features. Plus-exclusive capabilities: (1) Active health checks: Send periodic probes independently from client requests, detect failures before clients affected, customizable intervals/pass/fail criteria. (2) REST API: Add/modify/remove upstream servers without process reload (dynamic reconfiguration). (3) Live activity monitoring: Real-time dashboard with performance metrics, logs, upstream status. (4) Session persistence: Sticky session methods (sticky cookie, sticky route, sticky learn) for stateful apps. (5) JWT/OIDC authentication: Native OAuth/SAML support without third-party modules. (6) Key-value store, clustering, WAF integration ($2,000/year WAF addon). Use Plus if: Enterprise deployments requiring zero-downtime updates, need automated health management, require 24/7 F5 support, handling dynamic infrastructure (auto-scaling). Use open source if: Shared hosting, static topology, budget constraints, community support sufficient. Decision matrix: <100 servers → open source adequate; >100 servers or dynamic config → Plus value clear. Best practice (2025): Open source covers 90% of deployments; Plus becomes essential at enterprise scale with operational automation requirements.

99% confidence
Q: How do you test and reload NGINX configuration without downtime?

A

Nginx provides zero-downtime configuration testing and reloading (nginx 1.26+). Test syntax: nginx -t (validates syntax and semantics, shows errors with file:line). Output example: nginx: configuration file /etc/nginx/nginx.conf test is successful. Test specific config: nginx -t -c /path/to/nginx.conf (test alternative config file). Graceful reload: nginx -s reload (loads new config without dropping connections, workers finish current requests before shutting down, new workers start with new config). Signal commands: nginx -s reload (graceful reload, HUP signal), nginx -s stop (fast shutdown, TERM signal), nginx -s quit (graceful shutdown, QUIT signal, wait for workers to finish), nginx -s reopen (reopen log files, USR1 signal, essential for log rotation). Service management (systemd): systemctl reload nginx (equivalent to nginx -s reload), systemctl restart nginx (stop + start, brief downtime), systemctl status nginx (check running status). Essential workflow: (1) Edit config files, (2) nginx -t (test), (3) nginx -s reload (apply). Common test errors: Syntax error (missing semicolon, unclosed brace), invalid directive name, wrong context (directive in wrong block), file permissions (cannot access include files), upstream conflicts. Best practice (2025): Always run nginx -t before reload, use systemctl reload for managed deployments, monitor error logs after reload (tail -f /var/log/nginx/error.log), implement config validation in CI/CD pipelines, test in staging before production. Debug: nginx -V (show compile-time options and modules), nginx -T (dump full configuration including includes).

99% confidence
A

Nginx (nginx 1.26+ stable as of 2025) is open-source web server, reverse proxy, load balancer, and HTTP cache using event-driven architecture. Key features: handles 10,000+ concurrent connections efficiently with low resource usage, reverse proxy for HTTP/HTTPS/TCP/UDP protocols, load balancing with 6 algorithms (round-robin, least_conn, ip_hash, hash, random, random two choices), SSL/TLS 1.2/1.3 termination, HTTP/2 support and experimental HTTP/3 with QUIC (nginx 1.25+), proxy caching, WebSocket proxying, rate limiting with leaky bucket algorithm, gzip/brotli compression, URL rewriting. Use cases: web server (superior static file performance vs Apache), reverse proxy for Node.js/Python/Java apps, API gateway, load balancer, media streaming. Configuration: directive-based syntax in /etc/nginx/nginx.conf. Performance: Event-driven architecture enables high concurrency with minimal memory footprint.

99% confidence
A

Nginx config uses directive-based syntax with hierarchical contexts (nginx 1.26+). Main contexts: main (global settings like user, worker_processes), events (connection processing: worker_connections), http (HTTP server settings: gzip, keepalive), server (virtual host definitions), location (URI-specific routing and handling). Syntax rules: Directives end with semicolon (;), blocks use curly braces {}, include files with include /etc/nginx/conf.d/*.conf;, comments start with #. Main config file: /etc/nginx/nginx.conf. Essential commands: nginx -t (test config syntax), nginx -s reload (graceful reload without downtime), nginx -s stop (fast shutdown), nginx -s quit (graceful shutdown). Example structure: http { upstream backend { server srv1:8080; } server { listen 80; server_name example.com; location / { root /var/www; try_files $uri $uri/ =404; } location /api/ { proxy_pass http://backend; } } }. Best practice (2025): Modular configs in /etc/nginx/conf.d/, test before reload.

99% confidence
A

Server blocks (virtual hosts) define separate virtual servers on single nginx instance, enabling multi-site hosting. Syntax: server { listen 80; server_name example.com www.example.com; root /var/www/example; }. listen directive: Binds to IP:port (listen 80; for HTTP, listen 443 ssl http2; for HTTPS with HTTP/2). Multiple listen directives supported. default_server flag: Marks catch-all server for unmatched requests (listen 80 default_server;). server_name directive: Defines virtual host names with multiple formats: exact names (example.com), wildcard prefix (.example.com), wildcard suffix (mail.), regex (~^(?\w+).example.com$). Server selection algorithm: (1) Exact name match, (2) Longest wildcard prefix (.example.com), (3) Longest wildcard suffix (mail.), (4) First matching regex (order matters), (5) Default server. Server names stored in hash tables for O(1) lookup. Best practice (2025): Use exact names for performance, one default_server per listen port, separate server blocks per domain for clarity. Essential for hosting multiple sites on single server.

99% confidence
A

location blocks configure URI-specific request handling using modifiers and priority matching algorithm. Syntax: location [modifier] pattern { directives; }. Modifiers: = (exact match, highest priority), ^~ (prefix match that stops regex evaluation), ~ (case-sensitive regex), * (case-insensitive regex), none (standard prefix match). Matching algorithm (nginx 1.26+): (1) Check all exact matches (=), use immediately if found. (2) Find longest prefix match using red-black tree O(log n). If uses ^, stop and use it. (3) Evaluate regex locations (~ and *) sequentially in config order, use first match. (4) If no regex matched, use stored longest prefix. Example: location = /login { } (exact), location ^ /static/ { } (prefix, no regex), location ~* .(jpg|png|gif)$ { } (case-insensitive regex), location / { } (catch-all). Variables available: $uri (normalized URI), $request_uri (original with query string), $args (query parameters). Nesting supported: location /api/ { location ~ .json$ { } }. Best practice (2025): Use exact/prefix for performance (O(log n)), regex only when necessary (O(n) sequential), place specific matches before general ones.

99% confidence
A

Reverse proxy forwards client requests to backend servers, acting as intermediary for security, performance, and scalability (nginx 1.26+ recommended). Basic configuration: upstream backend { server backend1.example.com:8080; server backend2.example.com:8080; } server { listen 80; location /api/ { proxy_pass http://backend; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; proxy_set_header Connection ""; } }. Essential headers: Host (preserves original host), X-Real-IP (client IP), X-Forwarded-For (proxy chain), X-Forwarded-Proto (original protocol http/https). Performance directives: proxy_http_version 1.1; (enable keepalive to backends), proxy_set_header Connection ""; (clear connection header for upstream keepalive), proxy_buffering on; (default, buffers responses). Benefits: SSL/TLS termination at proxy, load balancing across backends, caching for performance, security (hide backend infrastructure), single entry point for microservices. Use cases: Node.js/Python/Java apps, API gateways, microservices. Best practice (2025): Use upstream blocks for multiple backends, enable HTTP/1.1 and keepalive for performance, set appropriate timeouts (proxy_read_timeout, proxy_connect_timeout).

99% confidence
A

Nginx distributes traffic across multiple backend servers using upstream blocks and 6 load balancing algorithms (nginx 1.26+). Configuration: upstream backend { least_conn; server backend1.example.com:8080 weight=3 max_fails=3 fail_timeout=30s; server backend2.example.com:8080; server backend3.example.com:8080 backup; }. Algorithms: (1) round-robin (default, sequential distribution with weight support), (2) least_conn (routes to server with fewest active connections), (3) ip_hash (sticky sessions based on client IP hash), (4) hash $variable (custom key like URI or cookie), (5) random (random selection), (6) random two choices (nginx 1.15.1+, select 2 random servers, route to one with fewer connections). Server parameters: weight (relative weight, default 1), max_fails (mark down after N failures, default 1), fail_timeout (time to mark down, default 10s), backup (only used when primary servers fail), down (permanently mark unavailable). Passive health checks: Monitor actual requests, mark down after max_fails in fail_timeout window. Use with: location /api/ { proxy_pass http://backend; }. Benefits: High availability, horizontal scalability, no SPOF, automatic failover. Best practice (2025): Use least_conn for balanced load, set max_fails=3 fail_timeout=30s for stability, add backup servers for critical services.

99% confidence
A

SSL/TLS encrypts HTTP traffic (nginx 1.13.0+ required for TLS 1.3, OpenSSL 1.1.1+ required). 2025 best practice configuration: server { listen 443 ssl http2; server_name example.com; ssl_certificate /etc/ssl/certs/example.com.crt; ssl_certificate_key /etc/ssl/private/example.com.key; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers off; ssl_ciphers TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-RSA-AES256-GCM-SHA384; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always; }. HTTP to HTTPS redirect: server { listen 80; return 301 https://$host$request_uri; }. Key directives: ssl_protocols (TLS 1.2/1.3 only, disable older versions), ssl_prefer_server_ciphers off (let client choose TLS 1.3 ciphers), ssl_session_cache (improves performance, 1MB stores ~4000 sessions), OCSP stapling (validates certificate without client contacting CA), HSTS (force HTTPS, preload for browser list). Certificate sources: Let's Encrypt (free, auto-renewal with certbot). Requirements: nginx 1.13.0+ for TLS 1.3, OpenSSL 1.1.1+ for modern ciphers. Security: A+ rating on SSL Labs with this config.

99% confidence
A

try_files checks file existence in specified order, serves first match, essential for SPAs with client-side routing (React, Vue, Angular). Syntax: try_files file1 file2 ... uri | =code;. SPA configuration (2025 best practice): server { listen 80; root /usr/share/nginx/html; index index.html; location / { try_files $uri $uri/ /index.html; } }. How it works: (1) Check if $uri exists as file (e.g., /static/logo.png), (2) Check if $uri/ exists as directory with index, (3) Fallback to /index.html (triggers internal redirect). Why needed: SPAs handle routing client-side (/dashboard, /profile routes don't exist as files), direct access or refresh causes 404 without try_files, this config returns index.html which loads JS router. Named location fallback: location / { try_files $uri @backend; } location @backend { proxy_pass http://api:3000; } (try file, fallback to backend). Static files with error code: try_files $uri $uri/ =404; (return 404 if not found instead of fallback). Performance: More efficient than if statements (avoid "if is evil" anti-pattern), evaluated at file system level. Use cases: React/Vue/Angular SPAs, static sites with fallback, clean URLs without .html extension, API fallback patterns. Best practice (2025): Always use for SPAs, add $uri/ for directory index support, combine with caching headers for static assets.

99% confidence
A

Nginx proxy caching stores backend responses on disk for fast redelivery without hitting origin servers (nginx 1.0+). Two-step setup: (1) proxy_cache_path in http context defines cache location and parameters. (2) proxy_cache in location context enables caching. Complete example: http { proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off; } server { location / { proxy_cache my_cache; proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; proxy_cache_key $scheme$proxy_host$uri$is_args$args; proxy_pass http://backend; add_header X-Cache-Status $upstream_cache_status; } }. Key directives: levels (2-level directory hierarchy prevents performance degradation), keys_zone (shared memory zone storing cache metadata, 1MB stores ~8,000 keys), inactive (remove unaccessed entries after duration), use_temp_path=off (write directly to cache, not temporary files), proxy_cache_valid (set validity period per status code). Mechanism: Cache manager process evicts least-recently-used items when max_size exceeded. Respects origin Cache-Control headers. Performance: proxy_cache_lock (one request fills cache, others wait), proxy_cache_background_update (refresh stale content asynchronously), proxy_cache_use_stale (serve expired content during backend failures). Monitoring headers: $upstream_cache_status returns HIT, MISS, BYPASS, EXPIRED. Best practice (2025): Always specify proxy_cache_valid per status code, enable cache locking to prevent thundering herd, implement background updates for zero-downtime refreshes.

99% confidence
A

rewrite modifies request URI using regex patterns, but return directive is preferred for simple redirects (2025 best practice). Syntax: rewrite regex replacement [flag];. Flags: last (stop rewrite processing, search new location block), break (stop processing, serve from current location), redirect (return 302 temporary), permanent (return 301 permanent). Example: rewrite ^/old-page$ /new-page permanent; rewrite ^/product/(\d+)$ /products?id=$1 last;. Regex captures: Use $1, $2... for captured groups. return directive (cleaner alternative): return 301 https://$host$request_uri; (HTTP to HTTPS), return 301 /new-url; (simple redirect). When to use each: Use return for simple redirects (faster, more efficient), use rewrite for complex regex transformations, use try_files for file existence checks. Evaluation order: Rewrites execute sequentially within location block, processed before other directives. Best practice (2025): Prefer return over rewrite when possible (cleaner, faster), avoid complex rewrite chains (performance impact), use try_files for SPAs instead of rewrites, test with nginx -t before deployment. Common use cases: SEO-friendly URLs (rewrite ^/blog/(\w+)$ /blog.php?slug=$1 last;), forcing www subdomain (return 301 https://www.$host$request_uri;), canonical URLs (permanent redirects). Performance: return is faster than rewrite for simple redirects, minimize regex complexity.

99% confidence
A

Nginx excels at static file serving using zero-copy sendfile() system call (6Gbps → 30Gbps throughput per Netflix). 2025 optimized configuration: location / { root /var/www/html; sendfile on; sendfile_max_chunk 512k; tcp_nopush on; tcp_nodelay on; } location ~* .(jpg|jpeg|png|gif|ico|svg|webp)$ { expires 1y; add_header Cache-Control "public, immutable"; } location ~* .(css|js)$ { expires 1y; add_header Cache-Control "public, immutable"; gzip_static on; }. Key directives: sendfile on; (use OS sendfile() for zero-copy transfer, dramatically faster than read/write), sendfile_max_chunk 512k; (prevent one connection monopolizing worker, improves fairness), tcp_nopush on; (send full packets, works with sendfile to batch headers+data in one packet), tcp_nodelay on; (disable 200ms Nagle delay for last packet), gzip_static on; (serve pre-compressed .gz files if exist). Caching headers: expires 1y; for versioned assets (main.abc123.js), expires 1h; for dynamic content, add_header Cache-Control "public, immutable"; (browser never revalidates). File pattern matching: Use location ~* regex for case-insensitive extensions. Best practice (2025): Enable sendfile + tcp_nopush + tcp_nodelay together, pre-compress static assets (css.gz, js.gz) with gzip_static, use long cache times with versioned filenames, serve WebP/AVIF images for modern browsers. Performance: Nginx 10x faster than Node.js/Python for static files.

99% confidence
A

Nginx request rate limiting uses the leaky bucket algorithm to protect APIs and endpoints from abuse and DDoS attacks (nginx 1.0+). Two-step configuration: (1) Define zone in http context: limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s; where rate=10r/s means 1 request per 100ms. (2) Apply limit in location context: limit_req zone=api burst=5 nodelay;. Critical parameters: rate (requests per second r/s or per minute r/m), burst (queues up to N requests exceeding the rate limit), nodelay (forwards burst requests immediately instead of spacing them out, essential for APIs and login endpoints). Without nodelay, excess requests are queued and delayed sequentially, degrading user experience. Returns 503 Service Unavailable when hard limit exceeded. Configuration examples: Sensitive endpoints: limit_req zone=api burst=3 nodelay; (tight limits for /login), General APIs: limit_req zone=api burst=10-20 nodelay;. Memory efficiency: Use $binary_remote_addr (4/16 bytes) instead of $remote_addr (7-15 bytes). Zone sizing: 10m stores ~160,000 IP addresses. Best practice (2025): Always combine burst+nodelay together, adjust burst based on endpoint sensitivity, enable detailed logging to monitor violations.

99% confidence
A

Nginx variables store values for dynamic configuration, evaluated lazily at runtime. Built-in variables: $uri (normalized request URI without args), $request_uri (original URI with query string), $args (query parameters), $host (Host header or server name), $remote_addr (client IP, 7-15 bytes), $binary_remote_addr (binary IP, 4/16 bytes for IPv4/IPv6, memory efficient), $scheme (http or https), $request_method (GET/POST/etc), $http_name (request header, e.g., $http_user_agent), $upstream_cache_status (HIT/MISS/BYPASS), $server_name, $document_root. Custom variables: set $variable value; (simple assignment), map $http_upgrade $connection_upgrade { default upgrade; '' close; } (conditional logic, nginx 0.9.0+). Use cases: proxy_set_header X-Real-IP $remote_addr; (forward client IP), return 301 https://$host$request_uri; (redirect preserving URL), log_format main '$remote_addr - $request "$status"'; (access logs), if ($request_method = POST) { return 405; } (conditional processing). Best practice (2025): Use map directive instead of if for complex conditions (map evaluated once at startup, more efficient), use $binary_remote_addr in rate limiting (saves memory), avoid if when possible ("if is evil" - limited context support). Variable interpolation: Supported in strings since nginx 1.11.0 (set $full_url "https://$host$uri";). Scope: Variables are global, accessible from any context once defined.

99% confidence
A

Nginx proxies WebSocket connections using the Upgrade and Connection hop-by-hop headers that must be explicitly forwarded to backends. Minimal configuration: location /ws/ { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; }. Critical: proxy_http_version 1.1 is mandatory—HTTP 1.0 doesn't support WebSocket upgrade negotiation. For both HTTP and WebSocket traffic on same location, use map directive: map $http_upgrade $connection_upgrade { default upgrade; '' close; } then proxy_set_header Connection $connection_upgrade; This ensures connections close properly when no upgrade is requested. Default proxy_read_timeout is 60 seconds, which terminates idle connections—increase to 3600s (1 hour) or implement backend ping/pong frames for keep-alive. Compatible with Socket.io, ws library, native WebSocket API.

99% confidence
A

root and alias serve files with different path resolution, critical distinction to avoid 404 errors. root directive: Appends full URI to root path. Example: root /var/www/html; location /images/ { } → Request /images/logo.png serves /var/www/html/images/logo.png. Path = root + full_URI. alias directive: Replaces location part of URI with alias path. Example: location /images/ { alias /data/pictures/; } → Request /images/logo.png serves /data/pictures/logo.png. Path = alias + (URI - location). Key differences: root appends entire URI (including location), alias substitutes location prefix. Common error: alias must end with / if location ends with / (location /images/ { alias /data/pictures/; } ✓, alias /data/pictures ✗). Performance: root is faster (simple string concatenation vs substring replacement). Use cases for root: Standard directory serving where URI matches filesystem structure, most common use case (location / { root /var/www/html; }). Use cases for alias: Remapping URIs to different filesystem paths (location /static/ { alias /srv/assets/; }), serving files from outside webroot, legacy path compatibility. Best practice (2025): Use root by default (simpler, faster), use alias only when URI path differs from filesystem path, always match trailing / between location and alias. Example combining both: location / { root /var/www; } location /downloads/ { alias /mnt/storage/files/; }.

99% confidence
A

Nginx supports gzip (universal) and brotli (20-30% better compression, modern browsers), best practice is enabling both (2025). gzip configuration: http { gzip on; gzip_vary on; gzip_comp_level 5; gzip_min_length 1000; gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml; }. Brotli configuration (requires ngx_brotli module): brotli on; brotli_comp_level 6; brotli_types text/plain text/css application/json application/javascript application/xml+rss; brotli_static on;. Key directives: gzip_comp_level 5-6 (balance compression/CPU, 1=fast/low, 9=slow/high), gzip_min_length 1000; (skip files <1KB, overhead not worth it), gzip_vary on; (add Vary: Accept-Encoding header for caching proxies), gzip_types (MIME types to compress, text/html always included), gzip_static on; (serve pre-compressed .gz files). Brotli advantages: 15-25% better compression than gzip, superior for UTF-8 text (HTML/CSS/JS), improves Core Web Vitals scores. Use both: Serve brotli to modern browsers (automatic fallback to gzip for legacy clients). Don't compress: Images (jpg/png), videos (mp4), already compressed formats (overhead without benefit). Best practice (2025): Enable both gzip and brotli, use level 4-6 for dynamic content (real-time compression), pre-compress static assets at build time (brotli_static, gzip_static), prioritize brotli for mobile performance.

99% confidence
A

error_page directive creates internal redirects for HTTP error codes without browser redirect. Syntax: error_page code [codes...] [=response] uri;. Basic example: error_page 404 /404.html; error_page 500 502 503 504 /50x.html;. Protect error pages from direct access: location = /404.html { internal; root /var/www/html; }. The internal directive prevents external requests to error URIs. Modify response code returned to client: error_page 404 =200 /empty.gif; returns HTTP 200 instead of 404. For complex handling, use named location fallback: error_page 404 @fallback; location @fallback { proxy_pass http://backend; }. Critical: error_page performs internal redirect (URI changes, method becomes GET), not HTTP redirect (no 3xx status). Best practice (2025): Always use internal directive, store error pages locally, combine with error logging for monitoring.

99% confidence
A

Nginx offers passive health checks (open source, included) and active health checks (Plus only). PASSIVE CHECKS: Monitor actual client requests to upstream servers. Mark server unavailable after max_fails consecutive failures occur within fail_timeout window. Configuration: upstream backend { server srv1.com max_fails=3 fail_timeout=30s; server srv2.com backup; }. Mechanism: If srv1 fails 3 times in 30 seconds, marked down for 30 seconds, then retried. Parameters: max_fails (default 1), fail_timeout (default 10s, sets both detection window and recovery duration). Additional parameter: slow_start=30s gradually ramps traffic to recovering servers. ACTIVE CHECKS (Plus only): Send periodic health probes independent of client requests. Configuration: upstream backend { zone upstream_zone 64k; server srv1.com; } location /api/ { health_check uri=/health interval=5s fails=3 passes=2; }. Mechanism: Sends requests every 5 seconds to /health, marks down after 3 failures, recovers after 2 successes. Custom match rules define pass criteria (status codes, headers, body content). Benefits: Detect failures before affecting clients, customizable health endpoints. Best practice (2025): Use passive checks with max_fails=3 fail_timeout=30s for open-source production, add backup servers for critical services, upgrade to Plus for active checks if needing proactive monitoring.

99% confidence
A

Nginx access control uses allow/deny directives (from ngx_http_access_module) for IP-based restrictions and auth_basic/auth_basic_user_file (from ngx_http_auth_basic_module) for HTTP authentication. IP-based syntax: allow address|CIDR|all; deny address|CIDR|all; Rules evaluated sequentially, first match stops processing—place deny all; last to block remaining IPs. Example: location /admin/ { allow 192.168.1.0/24; allow 10.0.0.1; deny all; } allows subnet 192.168.1.0/24 and IP 10.0.0.1, denies everything else. HTTP basic auth example: auth_basic "Restricted Area"; auth_basic_user_file /etc/nginx/.htpasswd; (generate with htpasswd -c /etc/nginx/.htpasswd username). Combine methods with satisfy directive: satisfy all; requires BOTH IP whitelist AND valid credentials. satisfy any; requires EITHER approved IP OR valid credentials. Production example: location /api/ { satisfy all; allow 192.168.1.0/24; deny all; auth_basic "API Access"; auth_basic_user_file conf/htpasswd; } Multi-layer security: IP for infrastructure, basic auth for humans, external IP lists possible via geo module. Geo-blocking (requires MaxMind GeoIP2 database): geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb; map $geoip2_data_country_iso_code $country { ~^CN|RU$ block; default pass; } if ($country = block) { return 403; }. Best practice (2025): Use satisfy all; for sensitive endpoints, store .htpasswd with 600 permissions, validate IPs in staging, combine with rate limiting for DDoS protection.

99% confidence
A

Nginx performance tuning (2025): Set worker_processes auto; to match CPU cores for optimal throughput. Configure worker_connections 1024-4096; (default 512, test incrementally under load—each worker handles this many concurrent connections). Enable zero-copy file transfer: sendfile on; tcp_nopush on; tcp_nodelay on; (dramatically improves static file performance, 6Gbps → 30Gbps throughput). Keepalive for clients: keepalive_timeout 15s; keepalive_requests 100; (reuse connections, reduce TCP handshake overhead). Keepalive for upstream backends: keepalive 32; inside upstream block (persistent connections reduce latency). Timeouts: client_body_timeout 10s; client_header_timeout 10s; send_timeout 10s; (prevent slow clients blocking worker processes). Compression: gzip on; gzip_comp_level 4; gzip_types text/css application/javascript application/json; (level 4 optimal CPU/compression balance at ~95% compression vs. level 9). Buffer client uploads: client_body_buffer_size 128k; (process in memory before writing to disk). Cache file metadata: open_file_cache max=10000 inactive=20s; (caches file descriptors and stat info). Logging: access_log /var/log/nginx/access.log main buffer=32k flush=1m; (batch disk writes, reduce I/O overhead). Proxy buffering: proxy_buffering on; (default, buffers backend responses). Critical principle: Change one setting at a time, test under realistic load, revert if metrics don't improve. Most systems benefit from worker_processes auto + sendfile + tcp_nopush + tcp_nodelay alone.
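A starting-point sketch of a tuned nginx.conf built from the settings above; the backend address and numbers are illustrative, and the proxy_http_version/Connection lines (not mentioned above) are needed for upstream keepalive to take effect:

    worker_processes auto;

    events {
        worker_connections 2048;       # raise gradually under load testing
    }

    http {
        sendfile    on;
        tcp_nopush  on;
        tcp_nodelay on;

        keepalive_timeout  15s;
        keepalive_requests 100;

        client_body_timeout     10s;
        client_header_timeout   10s;
        send_timeout            10s;
        client_body_buffer_size 128k;

        gzip            on;
        gzip_comp_level 4;
        gzip_types      text/css application/javascript application/json;

        open_file_cache max=10000 inactive=20s;

        access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

        upstream app {
            server 127.0.0.1:8080;     # placeholder backend
            keepalive 32;              # pool of idle upstream connections
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app;
                proxy_http_version 1.1;
                proxy_set_header Connection "";   # required for upstream keepalive
            }
        }
    }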

99% confidence
A

HTTP/2 enables multiplexing, header compression, and improved performance. Configuration: listen 443 ssl; http2 on; (since nginx 1.25.1 the http2 directive replaces the deprecated http2 parameter of listen). Requirements: nginx 1.9.5+, TLS required in practice (browsers only support HTTP/2 over TLS). Benefits: Multiple requests over a single TCP connection without application-layer head-of-line blocking, HPACK header compression reduces overhead, stream prioritization. HTTP/3 with QUIC (nginx 1.25.0+): listen 443 quic reuseport; listen 443 ssl; add_header Alt-Svc 'h3=":443"; ma=86400'; (QUIC runs over UDP and requires building nginx against an SSL library with QUIC support, e.g. OpenSSL 3.5+, BoringSSL, LibreSSL, or QuicTLS). Performance tuning: http3_max_concurrent_streams 1024; http3_stream_buffer_size 1m; quic_gso on; quic_retry on;. Network: Allow UDP traffic on port 443 for HTTP/3. Best practice (2025): Enable HTTP/2 for all HTTPS sites (clients fall back from HTTP/3 to it automatically), use HTTP/3 for cutting-edge performance, and advertise HTTP/3 support with the Alt-Svc header. Protocol negotiation is automatic via ALPN.
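A sketch of a server offering HTTP/1.1, HTTP/2, and HTTP/3 on the same port, assuming nginx is built with the HTTP/3 module and a QUIC-capable SSL library; hostname and certificate paths are placeholders:

    server {
        listen 443 quic reuseport;      # HTTP/3 over UDP 443
        listen 443 ssl;                 # HTTP/1.1 and HTTP/2 over TCP 443

        http2 on;                       # directive form, nginx 1.25.1+

        server_name example.com;        # placeholder

        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Tell browsers that HTTP/3 is available on the same port.
        add_header Alt-Svc 'h3=":443"; ma=86400';

        root /var/www/example;
    }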

99% confidence
A

Nginx logs record HTTP requests (access_log) and server errors (error_log). Defaults: access_log /var/log/nginx/access.log (combined format), error_log /var/log/nginx/error.log (the built-in default level is error; packaged configs commonly set notice or warn). Error log levels (increasing severity): debug, info, notice, warn, error, crit, alert, emerg. Configuration: error_log /var/log/nginx/error.log warn;. Access log format with log_format directive (http block): log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';. Critical 2025 best practice: JSON logging for structured observability (Elasticsearch/Splunk integration). log_format json escape=json '{"time":"$time_iso8601","remote_addr":"$remote_addr","request":"$request","status":$status,"bytes":$body_bytes_sent,"duration":$request_time,"upstream":"$upstream_response_time","user_agent":"$http_user_agent"}'; then access_log /var/log/nginx/access.log json;. Performance optimization: access_log /path buffer=32k flush=1m; reduces disk I/O, open_log_file_cache max=10000 inactive=20s; caches file descriptors. Conditional logging: map $status $loggable { ~^[23] 0; default 1; } access_log /path if=$loggable; logs only 4xx/5xx errors. Centralized logging: access_log syslog:server=127.0.0.1:514 json; or syslog:server=remote.example.com,facility=local7,tag=nginx json;. Best practice (2025): Use JSON format for logging infrastructure, enable buffering+open_log_file_cache for high-traffic, rotate logs with logrotate, include $request_time and $upstream_response_time for performance debugging, send to ELK/Splunk/CloudWatch.
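A sketch of the JSON, conditional, and buffered pieces above, meant for the http{} context; the field set is illustrative, not a fixed schema:

    log_format json escape=json
        '{"time":"$time_iso8601","remote_addr":"$remote_addr",'
        '"request":"$request","status":$status,'
        '"bytes":$body_bytes_sent,"duration":$request_time,'
        '"upstream":"$upstream_response_time","user_agent":"$http_user_agent"}';

    map $status $loggable {
        ~^[23]  0;      # skip 2xx/3xx
        default 1;      # keep everything else (4xx/5xx)
    }

    server {
        listen 80;
        access_log /var/log/nginx/access.log json buffer=32k flush=1m if=$loggable;
    }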

99% confidence
A

CORS in Nginx requires setting 4 core headers: Access-Control-Allow-Origin (which origin), Access-Control-Allow-Methods (HTTP verbs), Access-Control-Allow-Headers (custom headers), Access-Control-Expose-Headers (response headers JS can access). Production configuration with origin validation: map $http_origin $allow_origin { ~^https://(www\.)?example\.com$ $http_origin; default ''; } location /api/ { add_header Access-Control-Allow-Origin $allow_origin always; add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS' always; add_header Access-Control-Allow-Headers 'Authorization, Content-Type, Accept' always; add_header Access-Control-Expose-Headers 'Content-Length, X-JSON-Response-Count' always; add_header Access-Control-Max-Age 86400 always; if ($request_method = OPTIONS) { return 204; } proxy_pass http://backend; }. Critical: OPTIONS preflight requests must return 204 No Content with CORS headers (use the always flag so headers are added for all status codes). For credentialed requests: add_header Access-Control-Allow-Credentials true always; but then must use a specific origin, never wildcard '*'. Security: Always validate origins against a whitelist, never echo arbitrary $http_origin without validation, and escape dots in the map regex so look-alike domains cannot match. Access-Control-Max-Age 86400 (1 day) caches preflight results, reducing redundant OPTIONS requests.
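A sketch of the validated-origin pattern above; example.com and the backend address are placeholders:

    # http{} context: only echo origins that match the whitelist.
    map $http_origin $allow_origin {
        "~^https://(www\.)?example\.com$" $http_origin;
        default "";
    }

    server {
        listen 80;

        location /api/ {
            add_header Access-Control-Allow-Origin  $allow_origin always;
            add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type, Accept" always;
            add_header Access-Control-Max-Age       86400 always;

            # Preflight: answer locally; the add_header lines above still apply.
            if ($request_method = OPTIONS) {
                return 204;
            }

            proxy_pass http://127.0.0.1:8080;    # placeholder backend
        }
    }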

99% confidence
A

NGINX Open Source (free): Community-driven, passive health checks only (monitor actual requests, detect failures post-facto), no SLA support. Both versions: Identical core engine, reverse proxy, load balancing, SSL/TLS, compression, caching. NGINX Plus ($849-$2,099/year, 2025 pricing): Commercial subscription from F5 with enterprise features. Plus-exclusive capabilities: (1) Active health checks: Send periodic probes independently from client requests, detect failures before clients affected, customizable intervals/pass/fail criteria. (2) REST API: Add/modify/remove upstream servers without process reload (dynamic reconfiguration). (3) Live activity monitoring: Real-time dashboard with performance metrics, logs, upstream status. (4) Session persistence: Sticky session methods (sticky cookie, sticky route, sticky learn) for stateful apps. (5) JWT/OIDC authentication: Native OAuth/SAML support without third-party modules. (6) Key-value store, clustering, WAF integration ($2,000/year WAF addon). Use Plus if: Enterprise deployments requiring zero-downtime updates, need automated health management, require 24/7 F5 support, handling dynamic infrastructure (auto-scaling). Use open source if: Shared hosting, static topology, budget constraints, community support sufficient. Decision matrix: <100 servers → open source adequate; >100 servers or dynamic config → Plus value clear. Best practice (2025): Open source covers 90% of deployments; Plus becomes essential at enterprise scale with operational automation requirements.

99% confidence
A

Nginx provides zero-downtime configuration testing and reloading (nginx 1.26+). Test syntax: nginx -t (validates syntax and semantics, shows errors with file:line). Output example: nginx: configuration file /etc/nginx/nginx.conf test is successful. Test specific config: nginx -t -c /path/to/nginx.conf (test alternative config file). Graceful reload: nginx -s reload (loads new config without dropping connections, workers finish current requests before shutting down, new workers start with new config). Signal commands: nginx -s reload (graceful reload, HUP signal), nginx -s stop (fast shutdown, TERM signal), nginx -s quit (graceful shutdown, QUIT signal, wait for workers to finish), nginx -s reopen (reopen log files, USR1 signal, essential for log rotation). Service management (systemd): systemctl reload nginx (equivalent to nginx -s reload), systemctl restart nginx (stop + start, brief downtime), systemctl status nginx (check running status). Essential workflow: (1) Edit config files, (2) nginx -t (test), (3) nginx -s reload (apply). Common test errors: Syntax error (missing semicolon, unclosed brace), invalid directive name, wrong context (directive in wrong block), file permissions (cannot access include files), upstream conflicts. Best practice (2025): Always run nginx -t before reload, use systemctl reload for managed deployments, monitor error logs after reload (tail -f /var/log/nginx/error.log), implement config validation in CI/CD pipelines, test in staging before production. Debug: nginx -V (show compile-time options and modules), nginx -T (dump full configuration including includes).

99% confidence