
NGINX FAQ & Answers

82 expert NGINX answers researched from official documentation. Every answer cites authoritative sources you can verify.

unknown

25 questions
A

SSL/TLS encrypts HTTP traffic (nginx 1.13.0+ and OpenSSL 1.1.1+ required for TLS 1.3). 2025 best-practice configuration: server { listen 443 ssl; http2 on; server_name example.com; ssl_certificate /etc/ssl/certs/example.com.crt; ssl_certificate_key /etc/ssl/private/example.com.key; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers off; ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always; }. Note: when nginx is built with OpenSSL, TLS 1.3 cipher suites (TLS_AES_256_GCM_SHA384 and friends) are not controlled by ssl_ciphers — that directive only affects TLS 1.2 and earlier, and TLS 1.3 suites are enabled by default. On nginx 1.25.1+ use the standalone http2 on; directive; the http2 parameter of listen is deprecated. HTTP to HTTPS redirect: server { listen 80; return 301 https://$host$request_uri; }. Key directives: ssl_protocols (TLS 1.2/1.3 only, disable older versions), ssl_prefer_server_ciphers off (let the client choose), ssl_session_cache (improves performance; 1 MB stores ~4,000 sessions), OCSP stapling (server staples certificate status so clients need not contact the CA), HSTS (forces HTTPS; preload submits the domain to browser preload lists). Certificate sources: Let's Encrypt (free, auto-renewal with certbot). Security: this configuration scores A+ on SSL Labs.
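Pulled out as a readable block, the hardened server described above looks like the sketch below. The domain, certificate paths, and resolver choice are placeholders; `http2 on;` is the nginx 1.25.1+ form of enabling HTTP/2.

```nginx
# Hardened TLS server — example.com and the certificate paths are placeholders.
server {
    listen 443 ssl;
    http2 on;                               # nginx 1.25.1+; older: listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;          # let clients pick their preferred suite
    # ssl_ciphers is left at the build default here; tune it for compliance needs.

    ssl_session_cache shared:SSL:50m;       # ~4,000 sessions per MB
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    ssl_stapling on;                        # OCSP stapling
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt;
    resolver 8.8.8.8 8.8.4.4 valid=300s;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
}

# Redirect all plain HTTP to HTTPS.
server {
    listen 80;
    return 301 https://$host$request_uri;
}
```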

95% confidence
A

Nginx request rate limiting uses the leaky bucket algorithm to protect APIs and endpoints from abuse and DDoS attacks (nginx 1.0+). Two-step configuration: (1) Define zone in http context: limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s; where rate=10r/s means 1 request per 100ms. (2) Apply limit in location context: limit_req zone=api burst=5 nodelay;. Critical parameters: rate (requests per second r/s or per minute r/m), burst (queues up to N requests exceeding the rate limit), nodelay (forwards burst requests immediately instead of spacing them out, essential for APIs and login endpoints). Without nodelay, excess requests are queued and delayed sequentially, degrading user experience. Returns 503 Service Unavailable by default when the limit is exceeded (set limit_req_status 429; to return the more accurate 429 Too Many Requests). Configuration examples: sensitive endpoints: limit_req zone=api burst=3 nodelay; (tight limits for /login); general APIs: limit_req zone=api burst=20 nodelay; (a burst of roughly 10-20, tuned to expected client behavior). Memory efficiency: Use $binary_remote_addr (4/16 bytes) instead of $remote_addr (7-15 bytes). Zone sizing: 10m stores ~160,000 IP addresses. Best practice (2025): Always combine burst+nodelay together, adjust burst based on endpoint sensitivity, enable detailed logging to monitor violations.
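A sketch of the two-step setup with separate zones for login and general API traffic. The zone names, rates, burst values, and the choice of `limit_req_status 429` are illustrative, not mandated by nginx:

```nginx
http {
    # Key on the compact binary client address (4/16 bytes per entry).
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
    limit_req_zone $binary_remote_addr zone=api:10m   rate=10r/s;

    server {
        location /login {
            limit_req zone=login burst=3 nodelay;   # tight limit, no queuing delay
            limit_req_status 429;                   # 429 instead of the default 503
            proxy_pass http://backend;              # placeholder upstream
        }
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://backend;              # placeholder upstream
        }
    }
}
```

The `backend` upstream must be defined elsewhere for this fragment to load.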

95% confidence
A

Nginx (nginx 1.26+ stable as of 2025) is an open-source web server, reverse proxy, load balancer, and HTTP cache built on an event-driven architecture. Key features: handles 10,000+ concurrent connections efficiently with low resource usage, reverse proxying for HTTP/HTTPS/TCP/UDP protocols, load balancing with 6 algorithms (round-robin, least_conn, ip_hash, hash, random, random with two choices), SSL/TLS 1.2/1.3 termination, HTTP/2 support and experimental HTTP/3 with QUIC (nginx 1.25+), proxy caching, WebSocket proxying, rate limiting with the leaky bucket algorithm, gzip/brotli compression, URL rewriting. Use cases: web server (superior static file performance vs Apache), reverse proxy for Node.js/Python/Java apps, API gateway, load balancer, media streaming. Configuration: directive-based syntax in /etc/nginx/nginx.conf. Performance: the event-driven architecture enables high concurrency with a minimal memory footprint.

95% confidence
A

Nginx provides zero-downtime configuration testing and reloading (nginx 1.26+). Test syntax: nginx -t (validates syntax and semantics, shows errors with file:line). Output example: nginx: configuration file /etc/nginx/nginx.conf test is successful. Test specific config: nginx -t -c /path/to/nginx.conf (test alternative config file). Graceful reload: nginx -s reload (loads new config without dropping connections, workers finish current requests before shutting down, new workers start with new config). Signal commands: nginx -s reload (graceful reload, HUP signal), nginx -s stop (fast shutdown, TERM signal), nginx -s quit (graceful shutdown, QUIT signal, wait for workers to finish), nginx -s reopen (reopen log files, USR1 signal, essential for log rotation). Service management (systemd): systemctl reload nginx (equivalent to nginx -s reload), systemctl restart nginx (stop + start, brief downtime), systemctl status nginx (check running status). Essential workflow: (1) Edit config files, (2) nginx -t (test), (3) nginx -s reload (apply). Common test errors: Syntax error (missing semicolon, unclosed brace), invalid directive name, wrong context (directive in wrong block), file permissions (cannot access include files), upstream conflicts. Best practice (2025): Always run nginx -t before reload, use systemctl reload for managed deployments, monitor error logs after reload (tail -f /var/log/nginx/error.log), implement config validation in CI/CD pipelines, test in staging before production. Debug: nginx -V (show compile-time options and modules), nginx -T (dump full configuration including includes).

95% confidence
A

Reverse proxy forwards client requests to backend servers, acting as intermediary for security, performance, and scalability (nginx 1.26+ recommended). Basic configuration: upstream backend { server backend1.example.com:8080; server backend2.example.com:8080; } server { listen 80; location /api/ { proxy_pass http://backend; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_http_version 1.1; proxy_set_header Connection ""; } }. Essential headers: Host (preserves original host), X-Real-IP (client IP), X-Forwarded-For (proxy chain), X-Forwarded-Proto (original protocol http/https). Performance directives: proxy_http_version 1.1; (enable keepalive to backends), proxy_set_header Connection ""; (clear connection header for upstream keepalive), proxy_buffering on; (default, buffers responses). Benefits: SSL/TLS termination at proxy, load balancing across backends, caching for performance, security (hide backend infrastructure), single entry point for microservices. Use cases: Node.js/Python/Java apps, API gateways, microservices. Best practice (2025): Use upstream blocks for multiple backends, enable HTTP/1.1 and keepalive for performance, set appropriate timeouts (proxy_read_timeout, proxy_connect_timeout).
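The pieces above fit together as the sketch below; the backend hostnames, keepalive pool size, and timeout values are illustrative starting points, not recommendations from the nginx documentation:

```nginx
# Reverse proxy with upstream keepalive and explicit timeouts.
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    keepalive 32;                          # idle connections kept open per worker
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 5s;          # fail fast when a backend is down
        proxy_read_timeout 60s;            # tolerance for slow backend responses
    }
}
```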

95% confidence
A

error_page directive creates internal redirects for HTTP error codes without browser redirect. Syntax: error_page code [codes...] [=response] uri;. Basic example: error_page 404 /404.html; error_page 500 502 503 504 /50x.html;. Protect error pages from direct access: location = /404.html { internal; root /var/www/html; }. The internal directive prevents external requests to error URIs. Modify response code returned to client: error_page 404 =200 /empty.gif; returns HTTP 200 instead of 404. For complex handling, use named location fallback: error_page 404 @fallback; location @fallback { proxy_pass http://backend; }. Critical: error_page performs internal redirect (URI changes, method becomes GET), not HTTP redirect (no 3xx status). Best practice (2025): Always use internal directive, store error pages locally, combine with error logging for monitoring.
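Combining the techniques above — custom pages protected by `internal`, plus a named-location fallback — might look like this sketch (paths and the `backend` upstream are placeholders):

```nginx
server {
    listen 80;
    root /var/www/html;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /404.html { internal; }     # unreachable by direct client request
    location = /50x.html { internal; }

    location / {
        try_files $uri @fallback;
    }
    location @fallback {                   # reached only via internal redirect
        proxy_pass http://backend;         # placeholder upstream
    }
}
```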

95% confidence
A

Nginx supports gzip (universal) and brotli (typically 15-25% better compression on text, modern browsers); 2025 best practice is enabling both. gzip configuration: http { gzip on; gzip_vary on; gzip_comp_level 5; gzip_min_length 1000; gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss application/atom+xml image/svg+xml; }. Brotli configuration (requires the third-party ngx_brotli module): brotli on; brotli_comp_level 6; brotli_types text/plain text/css application/json application/javascript application/xml+rss; brotli_static on;. Key directives: gzip_comp_level 5-6 (balance compression/CPU; 1=fast/low, 9=slow/high), gzip_min_length 1000; (skip files <1KB where the overhead isn't worth it), gzip_vary on; (adds the Vary: Accept-Encoding header for caching proxies), gzip_types (MIME types to compress; text/html is always included), gzip_static on; (serve pre-compressed .gz files). Brotli advantages: better ratios than gzip, superior for UTF-8 text (HTML/CSS/JS), improves Core Web Vitals scores. Use both: serve brotli to modern browsers with automatic fallback to gzip for legacy clients. Don't compress: images (jpg/png), videos (mp4), or other already-compressed formats (CPU overhead without benefit). Best practice (2025): enable both gzip and brotli, use level 4-6 for dynamic content (real-time compression), pre-compress static assets at build time (brotli_static, gzip_static), prioritize brotli for mobile performance.
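A combined http-context block for both encoders might look like the sketch below; the brotli directives only load if nginx was built with the third-party ngx_brotli module, and the MIME-type lists are examples to trim or extend:

```nginx
http {
    gzip on;
    gzip_vary on;                # emit Vary: Accept-Encoding for caches
    gzip_comp_level 5;
    gzip_min_length 1000;        # skip tiny responses
    gzip_types text/plain text/css application/json application/javascript
               application/xml+rss image/svg+xml;
    gzip_static on;              # serve pre-built .gz files when present

    brotli on;                   # requires ngx_brotli
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json application/javascript
                 application/xml+rss image/svg+xml;
    brotli_static on;            # serve pre-built .br files when present
}
```

nginx selects brotli for clients that advertise `br` in Accept-Encoding and falls back to gzip otherwise.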

95% confidence
A

Nginx distributes traffic across multiple backend servers using upstream blocks and 6 load balancing algorithms (nginx 1.26+). Configuration: upstream backend { least_conn; server backend1.example.com:8080 weight=3 max_fails=3 fail_timeout=30s; server backend2.example.com:8080; server backend3.example.com:8080 backup; }. Algorithms: (1) round-robin (default, sequential distribution with weight support), (2) least_conn (routes to server with fewest active connections), (3) ip_hash (sticky sessions based on client IP hash), (4) hash $variable (custom key like URI or cookie), (5) random (random selection), (6) random two choices (nginx 1.15.1+, select 2 random servers, route to one with fewer connections). Server parameters: weight (relative weight, default 1), max_fails (mark down after N failures, default 1), fail_timeout (time to mark down, default 10s), backup (only used when primary servers fail), down (permanently mark unavailable). Passive health checks: Monitor actual requests, mark down after max_fails in fail_timeout window. Use with: location /api/ { proxy_pass http://backend; }. Benefits: High availability, horizontal scalability, no SPOF, automatic failover. Best practice (2025): Use least_conn for balanced load, set max_fails=3 fail_timeout=30s for stability, add backup servers for critical services.
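Putting the recommended parameters together, an upstream pool per the answer above might be sketched like this (hostnames, weights, and timings are illustrative):

```nginx
# least_conn pool with weighting, passive health limits, and a backup server.
upstream backend {
    least_conn;
    server backend1.example.com:8080 weight=3 max_fails=3 fail_timeout=30s;
    server backend2.example.com:8080 max_fails=3 fail_timeout=30s;
    server backend3.example.com:8080 backup;   # used only when the others are down
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://backend;
    }
}
```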

95% confidence
A

rewrite modifies request URI using regex patterns, but return directive is preferred for simple redirects (2025 best practice). Syntax: rewrite regex replacement [flag];. Flags: last (stop rewrite processing, search new location block), break (stop processing, serve from current location), redirect (return 302 temporary), permanent (return 301 permanent). Example: rewrite ^/old-page$ /new-page permanent; rewrite ^/product/(\d+)$ /products?id=$1 last;. Regex captures: Use $1, $2... for captured groups. return directive (cleaner alternative): return 301 https://$host$request_uri; (HTTP to HTTPS), return 301 /new-url; (simple redirect). When to use each: Use return for simple redirects (faster, more efficient), use rewrite for complex regex transformations, use try_files for file existence checks. Evaluation order: Rewrites execute sequentially within location block, processed before other directives. Best practice (2025): Prefer return over rewrite when possible (cleaner, faster), avoid complex rewrite chains (performance impact), use try_files for SPAs instead of rewrites, test with nginx -t before deployment. Common use cases: SEO-friendly URLs (rewrite ^/blog/(\w+)$ /blog.php?slug=$1 last;), forcing the www subdomain (in the server block for the bare domain: server_name example.com; return 301 https://www.example.com$request_uri; — redirecting to https://www.$host would loop if the www server block also contained it), canonical URLs (permanent redirects). Performance: return is faster than rewrite for simple redirects, minimize regex complexity.

95% confidence
A

Nginx variables store values for dynamic configuration, evaluated lazily at runtime. Built-in variables: $uri (normalized request URI without args), $request_uri (original URI with query string), $args (query parameters), $host (Host header or server name), $remote_addr (client IP, 7-15 bytes as text), $binary_remote_addr (binary IP, 4/16 bytes for IPv4/IPv6, memory efficient), $scheme (http or https), $request_method (GET/POST/etc), $http_<name> (request headers, e.g., $http_user_agent), $upstream_cache_status (HIT/MISS/BYPASS), $server_name, $document_root. Custom variables: set $variable value; (simple assignment), map $http_upgrade $connection_upgrade { default upgrade; '' close; } (conditional logic, nginx 0.9.0+). Use cases: proxy_set_header X-Real-IP $remote_addr; (forward client IP), return 301 https://$host$request_uri; (redirect preserving URL), log_format main '$remote_addr - $request "$status"'; (access logs), if ($request_method = POST) { return 405; } (conditional processing). Best practice (2025): Use map directive instead of if for complex conditions (map evaluated once at startup, more efficient), use $binary_remote_addr in rate limiting (saves memory), avoid if when possible ("if is evil" - limited context support). Variable interpolation: variables expand inside quoted strings (set $full_url "https://$host$uri";). Scope: variables are declared globally, but each is evaluated per request.

95% confidence
A

Nginx access control uses allow/deny directives (from ngx_http_access_module) for IP-based restrictions and auth_basic/auth_basic_user_file (from ngx_http_auth_basic_module) for HTTP authentication. IP-based syntax: allow address|CIDR|all; deny address|CIDR|all; Rules evaluated sequentially, first match stops processing—place deny all; last to block remaining IPs. Example: location /admin/ { allow 192.168.1.0/24; allow 10.0.0.1; deny all; } allows subnet 192.168.1.0/24 and IP 10.0.0.1, denies everything else. HTTP basic auth example: auth_basic "Restricted Area"; auth_basic_user_file /etc/nginx/.htpasswd; (generate with htpasswd -c /etc/nginx/.htpasswd username). Combine methods with satisfy directive: satisfy all; requires BOTH IP whitelist AND valid credentials. satisfy any; requires EITHER approved IP OR valid credentials. Production example: location /api/ { satisfy all; allow 192.168.1.0/24; deny all; auth_basic "API Access"; auth_basic_user_file conf/htpasswd; } Multi-layer security: IP for infrastructure, basic auth for humans, external IP lists possible via geo module. Geo-blocking (requires the third-party ngx_http_geoip2_module and a MaxMind GeoIP2 database): geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb { $geoip2_data_country_iso_code country iso_code; } map $geoip2_data_country_iso_code $country { ~^(CN|RU)$ block; default pass; } if ($country = block) { return 403; }. Best practice (2025): Use satisfy all; for sensitive endpoints, store .htpasswd with 600 permissions, validate IPs in staging, combine with rate limiting for DDoS protection.
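The "external IP lists via geo module" idea mentioned above can be sketched with ngx_http_geo_module, which ships in standard nginx builds; the subnets and the `$office_ip` variable name are placeholders:

```nginx
# Map client addresses to a flag once (http context), then test it anywhere.
geo $office_ip {
    default        0;
    192.168.1.0/24 1;     # placeholder office subnet
    10.0.0.1       1;     # placeholder VPN gateway
}

server {
    listen 80;
    location /admin/ {
        if ($office_ip = 0) { return 403; }     # block everyone else
        auth_basic "Admin";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```

This behaves like allow/deny but the address list can be kept in a separate included file shared across server blocks.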

95% confidence
A

Server blocks (virtual hosts) define separate virtual servers on single nginx instance, enabling multi-site hosting. Syntax: server { listen 80; server_name example.com www.example.com; root /var/www/example; }. listen directive: Binds to IP:port (listen 80; for HTTP, listen 443 ssl http2; for HTTPS with HTTP/2). Multiple listen directives supported. default_server flag: Marks catch-all server for unmatched requests (listen 80 default_server;). server_name directive: Defines virtual host names with multiple formats: exact names (example.com), wildcard starting with an asterisk (*.example.com), wildcard ending with an asterisk (mail.*), regex (~^(?<sub>\w+)\.example\.com$). Server selection algorithm: (1) Exact name match, (2) Longest wildcard name starting with an asterisk (*.example.com), (3) Longest wildcard name ending with an asterisk (mail.*), (4) First matching regex (order matters), (5) Default server. Server names stored in hash tables for O(1) lookup. Best practice (2025): Use exact names for performance, one default_server per listen port, separate server blocks per domain for clarity. Essential for hosting multiple sites on single server.

95% confidence
A

Nginx proxies WebSocket connections using the Upgrade and Connection hop-by-hop headers that must be explicitly forwarded to backends. Minimal configuration: location /ws/ { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; }. Critical: proxy_http_version 1.1 is mandatory—HTTP 1.0 doesn't support WebSocket upgrade negotiation. For both HTTP and WebSocket traffic on same location, use map directive: map $http_upgrade $connection_upgrade { default upgrade; '' close; } then proxy_set_header Connection $connection_upgrade; This ensures connections close properly when no upgrade is requested. Default proxy_read_timeout is 60 seconds, which terminates idle connections—increase to 3600s (1 hour) or implement backend ping/pong frames for keep-alive. Compatible with Socket.io, ws library, native WebSocket API.
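Assembled into one fragment, the map-based variant described above looks like this sketch (the `backend` upstream and the one-hour timeout are placeholders to adjust):

```nginx
# WebSocket-aware proxy: upgrade when requested, plain HTTP otherwise.
map $http_upgrade $connection_upgrade {    # map lives in the http context
    default upgrade;
    ''      close;       # no Upgrade header -> normal HTTP, close hint
}

server {
    listen 80;
    location /ws/ {
        proxy_pass http://backend;           # placeholder upstream
        proxy_http_version 1.1;              # mandatory for the upgrade handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600s;            # keep idle sockets open for an hour
    }
}
```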

95% confidence
A

HTTP/2 enables multiplexing, header compression, and improved performance. Configuration: listen 443 ssl http2; (or, on nginx 1.25.1+, listen 443 ssl; with the standalone http2 on; directive, since the listen parameter is deprecated there). Requirements: Nginx 1.9.5+, SSL/TLS mandatory in practice (browsers only speak HTTP/2 over TLS), all modern browsers supported. Benefits: Multiple requests over single TCP connection without application-layer head-of-line blocking, HPACK header compression reduces overhead, stream prioritization. HTTP/3 with QUIC (Nginx 1.25.0+): listen 443 quic reuseport; listen 443 ssl; add_header Alt-Svc 'h3=":443"; ma=86400'; (QUIC runs over UDP and requires an SSL library with QUIC support: BoringSSL, quictls, LibreSSL, or OpenSSL 3.5+). Performance tuning: http3_max_concurrent_streams 1024; http3_stream_buffer_size 1m; quic_gso on; quic_retry on;. Network: Allow UDP traffic on port 443 for HTTP/3. Best practice (2025): Enable HTTP/2 for all HTTPS sites (clients fall back to it from HTTP/3 automatically), use HTTP/3 for cutting-edge performance, advertise HTTP/3 support with the Alt-Svc header. Protocol negotiation automatic via ALPN.

95% confidence
A

location blocks configure URI-specific request handling using modifiers and priority matching algorithm. Syntax: location [modifier] pattern { directives; }. Modifiers: = (exact match, highest priority), ^~ (prefix match that stops regex evaluation), ~ (case-sensitive regex), ~* (case-insensitive regex), none (standard prefix match). Matching algorithm (nginx 1.26+): (1) Check all exact matches (=), use immediately if found. (2) Find longest prefix match. If it uses ^~, stop and use it. (3) Evaluate regex locations (~ and ~*) sequentially in config order, use first match. (4) If no regex matched, use stored longest prefix. Example: location = /login { } (exact), location ^~ /static/ { } (prefix, no regex evaluation), location ~* \.(jpg|png|gif)$ { } (case-insensitive regex), location / { } (catch-all). Variables available: $uri (normalized URI), $request_uri (original with query string), $args (query parameters). Nesting supported: location /api/ { location ~ \.json$ { } }. Best practice (2025): Use exact/prefix matches for performance, regex only when necessary (evaluated sequentially), place specific matches before general ones.

95% confidence
A

Nginx performance tuning (2025): Set worker_processes auto; to match CPU cores for optimal throughput. Configure worker_connections 1024-4096; (default 512, test incrementally under load—each worker handles this many concurrent connections). Enable zero-copy file transfer: sendfile on; tcp_nopush on; tcp_nodelay on; (dramatically improves static file performance, 6Gbps → 30Gbps throughput). Keepalive for clients: keepalive_timeout 15s; keepalive_requests 100; (reuse connections, reduce TCP handshake overhead). Keepalive for upstream backends: keepalive 32; inside upstream block (persistent connections reduce latency). Timeouts: client_body_timeout 10s; client_header_timeout 10s; send_timeout 10s; (prevent slow clients blocking worker processes). Compression: gzip on; gzip_comp_level 4; gzip_types text/css application/javascript application/json; (level 4 optimal CPU/compression balance at ~95% compression vs. level 9). Buffer client uploads: client_body_buffer_size 128k; (process in memory before writing to disk). Cache file metadata: open_file_cache max=10000 inactive=20s; (caches file descriptors and stat info). Logging: access_log /var/log/nginx/access.log main buffer=32k flush=1m; (batch disk writes, reduce I/O overhead). Proxy buffering: proxy_buffering on; (default, buffers backend responses). Critical principle: Change one setting at a time, test under realistic load, revert if metrics don't improve. Most systems benefit from worker_processes auto + sendfile + tcp_nopush + tcp_nodelay alone.
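The directives above can be collected into one tuning skeleton. All numbers below are starting points to be validated under your own load (per the one-change-at-a-time principle), not universal optima:

```nginx
worker_processes auto;               # one worker per CPU core

events {
    worker_connections 2048;         # raise from the 512 default, test under load
}

http {
    sendfile on;                     # zero-copy static file transfer
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 15s;
    keepalive_requests 100;

    client_body_timeout 10s;         # drop slow clients early
    client_header_timeout 10s;
    send_timeout 10s;
    client_body_buffer_size 128k;

    open_file_cache max=10000 inactive=20s;
    access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

    gzip on;
    gzip_comp_level 4;
    gzip_types text/css application/javascript application/json;
}
```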

95% confidence
A

Nginx excels at static file serving using the zero-copy sendfile() system call (6Gbps → 30Gbps throughput per Netflix). 2025 optimized configuration: location / { root /var/www/html; sendfile on; sendfile_max_chunk 512k; tcp_nopush on; tcp_nodelay on; } location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ { expires 1y; add_header Cache-Control "public, immutable"; } location ~* \.(css|js)$ { expires 1y; add_header Cache-Control "public, immutable"; gzip_static on; }. Key directives: sendfile on; (use OS sendfile() for zero-copy transfer, dramatically faster than read/write), sendfile_max_chunk 512k; (prevent one connection monopolizing worker, improves fairness), tcp_nopush on; (send full packets, works with sendfile to batch headers+data in one packet), tcp_nodelay on; (disable the Nagle delay for the last packet), gzip_static on; (serve pre-compressed .gz files if exist). Caching headers: expires 1y; for versioned assets (main.abc123.js), expires 1h; for dynamic content, add_header Cache-Control "public, immutable"; (browser never revalidates). File pattern matching: Use location ~* regex for case-insensitive extensions. Best practice (2025): Enable sendfile + tcp_nopush + tcp_nodelay together, pre-compress static assets (.css.gz, .js.gz) for gzip_static, use long cache times with versioned filenames, serve WebP/AVIF images to modern browsers. Performance: nginx is typically an order of magnitude faster at static files than serving them from a Node.js/Python application server.

95% confidence
A

CORS in Nginx requires setting 4 core headers: Access-Control-Allow-Origin (which origin), Access-Control-Allow-Methods (HTTP verbs), Access-Control-Allow-Headers (custom headers), Access-Control-Expose-Headers (response headers JS can access). Production configuration with origin validation: map $http_origin $allow_origin { ~^https://(www\.)?example\.com$ $http_origin; default ''; } location /api/ { add_header Access-Control-Allow-Origin $allow_origin always; add_header Access-Control-Allow-Methods 'GET, POST, PUT, DELETE, OPTIONS' always; add_header Access-Control-Allow-Headers 'Authorization, Content-Type, Accept' always; add_header Access-Control-Expose-Headers 'Content-Length, X-JSON-Response-Count' always; add_header Access-Control-Max-Age 86400 always; if ($request_method = OPTIONS) { return 204; } proxy_pass http://backend; }. Critical: OPTIONS preflight requests must return 204 No Content with CORS headers (use always flag for all status codes). For credentialed requests: add_header Access-Control-Allow-Credentials true always; but then must use specific origin, never wildcard '*'. Security: Always validate origins against whitelist, never echo arbitrary $http_origin without validation. Access-Control-Max-Age 86400 (1 day) caches preflight results, reducing redundant OPTIONS requests.

95% confidence
A

root and alias serve files with different path resolution, critical distinction to avoid 404 errors. root directive: Appends full URI to root path. Example: root /var/www/html; location /images/ { } → Request /images/logo.png serves /var/www/html/images/logo.png. Path = root + full_URI. alias directive: Replaces location part of URI with alias path. Example: location /images/ { alias /data/pictures/; } → Request /images/logo.png serves /data/pictures/logo.png. Path = alias + (URI - location). Key differences: root appends entire URI (including location), alias substitutes location prefix. Common error: alias must end with / if location ends with / (location /images/ { alias /data/pictures/; } ✓, alias /data/pictures ✗). Performance: root is faster (simple string concatenation vs substring replacement). Use cases for root: Standard directory serving where URI matches filesystem structure, most common use case (location / { root /var/www/html; }). Use cases for alias: Remapping URIs to different filesystem paths (location /static/ { alias /srv/assets/; }), serving files from outside webroot, legacy path compatibility. Best practice (2025): Use root by default (simpler, faster), use alias only when URI path differs from filesystem path, always match trailing / between location and alias. Example combining both: location / { root /var/www; } location /downloads/ { alias /mnt/storage/files/; }.
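A side-by-side sketch of the two resolution rules, using placeholder paths, may make the difference concrete:

```nginx
server {
    listen 80;

    location /img-root/ {
        root /var/www/html;            # path = root + full URI
        # GET /img-root/logo.png -> /var/www/html/img-root/logo.png
    }

    location /img-alias/ {
        alias /data/pictures/;         # path = alias + (URI minus location prefix)
        # GET /img-alias/logo.png -> /data/pictures/logo.png
    }
}
```

Note the trailing slash on both the location and the alias path — mismatching them is the classic source of 404s.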

95% confidence
A

Nginx proxy caching stores backend responses on disk for fast redelivery without hitting origin servers (nginx 1.0+). Two-step setup: (1) proxy_cache_path in http context defines cache location and parameters. (2) proxy_cache in location context enables caching. Complete example: http { proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off; } server { location / { proxy_cache my_cache; proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; proxy_cache_key $scheme$proxy_host$uri$is_args$args; proxy_pass http://backend; add_header X-Cache-Status $upstream_cache_status; } }. Key directives: levels (2-level directory hierarchy prevents performance degradation), keys_zone (shared memory zone storing cache metadata, 1MB stores ~8,000 keys), inactive (remove unaccessed entries after duration), use_temp_path=off (write directly to cache, not temporary files), proxy_cache_valid (set validity period per status code). Mechanism: Cache manager process evicts least-recently-used items when max_size exceeded. Respects origin Cache-Control headers. Performance: proxy_cache_lock (one request fills cache, others wait), proxy_cache_background_update (refresh stale content asynchronously), proxy_cache_use_stale (serve expired content during backend failures). Monitoring headers: $upstream_cache_status returns HIT, MISS, BYPASS, EXPIRED. Best practice (2025): Always specify proxy_cache_valid per status code, enable cache locking to prevent thundering herd, implement background updates for zero-downtime refreshes.
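The full setup with the recommended thundering-herd and stale-serving options might be sketched as below; cache path, sizes, and the `backend` upstream are placeholders:

```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=10g inactive=60m use_temp_path=off;

    server {
        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
            proxy_cache_lock on;                  # one request fills, others wait
            proxy_cache_background_update on;     # refresh stale entries async
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
            add_header X-Cache-Status $upstream_cache_status;
            proxy_pass http://backend;            # placeholder upstream
        }
    }
}
```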

95% confidence
A

Nginx config uses directive-based syntax with hierarchical contexts (nginx 1.26+). Main contexts: main (global settings like user, worker_processes), events (connection processing: worker_connections), http (HTTP server settings: gzip, keepalive), server (virtual host definitions), location (URI-specific routing and handling). Syntax rules: Directives end with semicolon (;), blocks use curly braces {}, include files with include /etc/nginx/conf.d/*.conf;, comments start with #. Main config file: /etc/nginx/nginx.conf. Essential commands: nginx -t (test config syntax), nginx -s reload (graceful reload without downtime), nginx -s stop (fast shutdown), nginx -s quit (graceful shutdown). Example structure: http { upstream backend { server srv1:8080; } server { listen 80; server_name example.com; location / { root /var/www; try_files $uri $uri/ =404; } location /api/ { proxy_pass http://backend; } } }. Best practice (2025): Modular configs in /etc/nginx/conf.d/, test before reload.

95% confidence
A

try_files checks file existence in specified order, serves first match, essential for SPAs with client-side routing (React, Vue, Angular). Syntax: try_files file1 file2 ... uri | =code;. SPA configuration (2025 best practice): server { listen 80; root /usr/share/nginx/html; index index.html; location / { try_files $uri $uri/ /index.html; } }. How it works: (1) Check if $uri exists as file (e.g., /static/logo.png), (2) Check if $uri/ exists as directory with index, (3) Fallback to /index.html (triggers internal redirect). Why needed: SPAs handle routing client-side (/dashboard, /profile routes don't exist as files), direct access or refresh causes 404 without try_files, this config returns index.html which loads JS router. Named location fallback: location / { try_files $uri @backend; } location @backend { proxy_pass http://api:3000; } (try file, fallback to backend). Static files with error code: try_files $uri $uri/ =404; (return 404 if not found instead of fallback). Performance: More efficient than if statements (avoid "if is evil" anti-pattern), evaluated at file system level. Use cases: React/Vue/Angular SPAs, static sites with fallback, clean URLs without .html extension, API fallback patterns. Best practice (2025): Always use for SPAs, add $uri/ for directory index support, combine with caching headers for static assets.
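The SPA pattern plus the asset-caching advice from above combine into a sketch like this (root path and extension list are placeholders):

```nginx
# SPA serving: real files win, client-side routes fall back to index.html.
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Hashed build assets (main.abc123.js) can be cached aggressively.
    location ~* \.(js|css|png|svg|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;               # assets never fall back to index.html
    }
}
```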

95% confidence
A

NGINX Open Source (free): Community-driven, passive health checks only (monitor actual requests, detect failures post-facto), no SLA support. Both versions: Identical core engine, reverse proxy, load balancing, SSL/TLS, compression, caching. NGINX Plus ($849-$2,099/year, 2025 pricing): Commercial subscription from F5 with enterprise features. Plus-exclusive capabilities: (1) Active health checks: Send periodic probes independently from client requests, detect failures before clients affected, customizable intervals/pass/fail criteria. (2) REST API: Add/modify/remove upstream servers without process reload (dynamic reconfiguration). (3) Live activity monitoring: Real-time dashboard with performance metrics, logs, upstream status. (4) Session persistence: Sticky session methods (sticky cookie, sticky route, sticky learn) for stateful apps. (5) JWT/OIDC authentication: Native OAuth/SAML support without third-party modules. (6) Key-value store, clustering, WAF integration ($2,000/year WAF addon). Use Plus if: Enterprise deployments requiring zero-downtime updates, need automated health management, require 24/7 F5 support, handling dynamic infrastructure (auto-scaling). Use open source if: Shared hosting, static topology, budget constraints, community support sufficient. Decision matrix: <100 servers → open source adequate; >100 servers or dynamic config → Plus value clear. Best practice (2025): Open source covers 90% of deployments; Plus becomes essential at enterprise scale with operational automation requirements.

95% confidence
A

Nginx offers passive health checks (open source, included) and active health checks (Plus only). PASSIVE CHECKS: Monitor actual client requests to upstream servers. Mark server unavailable after max_fails failed requests within the fail_timeout window. Configuration: upstream backend { server srv1.com max_fails=3 fail_timeout=30s; server srv2.com backup; }. Mechanism: If srv1 fails 3 times in 30 seconds, marked down for 30 seconds, then retried. Parameters: max_fails (default 1), fail_timeout (default 10s, sets both detection window and recovery duration). Additional parameter: slow_start=30s gradually ramps traffic to recovering servers (available in NGINX Plus only). ACTIVE CHECKS (Plus only): Send periodic health probes independent of client requests. Configuration: upstream backend { zone upstream_zone 64k; server srv1.com; } location /api/ { health_check uri=/health interval=5s fails=3 passes=2; }. Mechanism: Sends requests every 5 seconds to /health, marks down after 3 failures, recovers after 2 successes. Custom match rules define pass criteria (status codes, headers, body content). Benefits: Detect failures before affecting clients, customizable health endpoints. Best practice (2025): Use passive checks with max_fails=3 fail_timeout=30s for open-source production, add backup servers for critical services, upgrade to Plus for active checks if needing proactive monitoring.

95% confidence
A

Nginx logs record HTTP requests (access_log) and server errors (error_log). Defaults: access_log /var/log/nginx/access.log, error_log /var/log/nginx/error.log with level warn. Error log levels (increasing severity): debug, info, notice, warn, error, crit, alert, emerg. Configuration: error_log /var/log/nginx/error.log warn;. Access log format with log_format directive (http block): log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';. Critical 2025 best practice: JSON logging for structured observability (Elasticsearch/Splunk integration). log_format json escape=json '{"time":"$time_iso8601","remote_addr":"$remote_addr","request":"$request","status":$status,"bytes":$body_bytes_sent,"duration":$request_time,"upstream":"$upstream_response_time","user_agent":"$http_user_agent"}'; then access_log /var/log/nginx/access.log json;. Performance optimization: access_log /path buffer=32k flush=1m; reduces disk I/O, open_log_file_cache max=10000 inactive=20s; caches file descriptors. Conditional logging: map $status $loggable { ~^[23] 0; default 1; } access_log /path if=$loggable; logs only 4xx/5xx errors. Centralized logging: access_log syslog:server=127.0.0.1:514 json; or syslog:server=remote.example.com,facility=local7,tag=nginx json;. Best practice (2025): Use JSON format for logging infrastructure, enable buffering+open_log_file_cache for high-traffic, rotate logs with logrotate, include $request_time and $upstream_response_time for performance debugging, send to ELK/Splunk/CloudWatch.

95% confidence

Server Configuration

18 questions
A

Named locations are defined with @ prefix and can only be reached through internal redirects (error_page, try_files). Syntax: location @name { ... }. Example: error_page 404 = @notfound; location @notfound { return 404 'Page not found'; }. Or with try_files: location / { try_files $uri @backend; } location @backend { proxy_pass http://127.0.0.1:3000; }. Named locations cannot be accessed directly by clients. Useful for fallback handling, error pages, and proxy configurations. The = in error_page changes the response code to what the named location returns.
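A complete sketch of the try_files fallback pattern; the backend port is an assumption:

```nginx
# try_files falling back to a named location for non-static requests.
server {
    listen 80;
    root /var/www/html;

    location / {
        # Serve the file if it exists, otherwise hand off to @backend.
        try_files $uri @backend;
    }

    location @backend {
        # Not directly reachable by clients; only via internal redirect.
        proxy_pass http://127.0.0.1:3000;
    }
}
```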

95% confidence
A

The alias directive replaces the matched location path with the specified directory, while root appends the full URI. With location /images/ { root /var/www; }, request /images/logo.png serves /var/www/images/logo.png. With location /images/ { alias /var/www/pictures/; }, the same request serves /var/www/pictures/logo.png - the /images/ part is replaced. Key difference: root appends the URI, alias replaces the location match. When the location ends with a trailing slash, the alias value must end with one too. Don't use root and alias together in the same location. Use alias when the URL path differs from the filesystem path.
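A side-by-side sketch of the two behaviors; the paths are examples:

```nginx
# root vs alias for requests under different prefixes.
location /images/ {
    root /var/www;              # /images/logo.png -> /var/www/images/logo.png (URI appended)
}

location /pics/ {
    alias /var/www/pictures/;   # /pics/logo.png -> /var/www/pictures/logo.png (match replaced)
}
```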

95% confidence
A

The server_name directive defines which hostnames a server block responds to. Syntax: server_name name1 [name2 ...];. Examples: server_name example.com www.example.com; (exact names), server_name *.example.com; (wildcard starting with an asterisk), server_name www.example.*; (wildcard ending with an asterisk), server_name .example.com; (shorthand for example.com and *.example.com), server_name ~^www\d+\.example\.com$; (regex, names starting with ~). When a request arrives, Nginx matches the Host header against server_name in this priority: exact name, longest wildcard starting with *, longest wildcard ending with *, first matching regex in configuration order. The name _ is a catch-all that matches no valid hostname, commonly used with default_server.
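A sketch of name-based virtual hosts with a catch-all default; domains and roots are placeholders:

```nginx
# Name-based virtual hosts plus a default server for unmatched Hosts.
server {
    listen 80;
    server_name example.com www.example.com;  # exact names win first
    root /var/www/example;
}

server {
    listen 80;
    server_name *.example.com;                # wildcard for subdomains
    root /var/www/subdomains;
}

server {
    listen 80 default_server;
    server_name _;                            # catch-all, matches no real hostname
    return 444;                               # close connection without a response
}
```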

95% confidence
A

Use the listen directive inside a server block to specify the port. Basic syntax: listen 8080; for port 8080. The full format is listen address[:port] [parameters];. Examples: listen 8080; (all interfaces on port 8080), listen 127.0.0.1:8080; (localhost only), listen *:8080; (explicit all interfaces). By default without a listen directive, Nginx listens on port 80 (or 8000 if not running as root). IPv6 addresses require square brackets: listen [::]:8080;. Choose ports above 1024 for non-root users, as ports below 1024 require root privileges.

95% confidence
A

Set the root directive to specify the document root directory. Basic server block: server { listen 8080; server_name localhost; root /var/www/html; index index.html; location / { try_files $uri $uri/ =404; } }. Ensure the directory exists and has correct permissions for the Nginx user (www-data on Debian/Ubuntu, nginx on CentOS/RHEL). Create directory: sudo mkdir -p /var/www/html. Set ownership: sudo chown -R www-data:www-data /var/www/html. Set permissions: sudo chmod -R 755 /var/www/html. The root directive can be in server or location blocks.

95% confidence
A

For Nginx 1.25.1 and newer, use the separate http2 directive inside the server block: server { listen 443 ssl; http2 on; ... }. For older versions, add the http2 parameter to listen: listen 443 ssl http2;. Browsers require SSL/TLS for HTTP/2. Key benefits: multiplexing (multiple requests over a single connection) and header compression; note that HTTP/2 server push support was removed in Nginx 1.25.1. Configuration: server { listen 443 ssl; http2 on; ssl_certificate /path/to/cert.pem; ssl_certificate_key /path/to/key.pem; }. The http2 parameter in the listen directive is deprecated since Nginx 1.25.1 but still works for backward compatibility.
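The 1.25.1+ style as a complete server block; certificate paths and the domain are placeholders:

```nginx
# HTTP/2 on nginx 1.25.1 or newer.
server {
    listen 443 ssl;
    http2 on;                     # separate directive since 1.25.1
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```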

95% confidence
A

The root directive specifies the root directory for serving files. Nginx appends the request URI to this path to find the file. Example: with root /var/www/html; a request for /images/logo.png serves /var/www/html/images/logo.png. The directive can be placed in http, server, or location contexts. Server-level root acts as default when location blocks lack their own root. Use try_files $uri $uri/ =404; with root for proper 404 handling. Unlike alias (which replaces the URI path), root appends the full URI path to the specified directory.

95% confidence
A

Location modifiers determine matching behavior with this priority order: 1. = (exact match) - highest priority, stops immediately on match. 2. ^~ (preferential prefix) - stops regex evaluation if matched. 3. ~ (case-sensitive regex) and ~* (case-insensitive regex) - first matching regex wins. 4. (none) - prefix match, can be overridden by regex. Examples: location = / {} matches only the root URI. location ^~ /static/ {} prefix match, no regex check. location ~ \.php$ {} case-sensitive regex. location ~* \.(jpg|png)$ {} case-insensitive regex. location / {} default prefix catch-all. Nginx finds the longest prefix match first, then checks regexes unless ^~ was used.
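One location per modifier, shown in effective priority order; the paths and actions are examples:

```nginx
# Location modifier priority, highest first.
location = /healthz { return 200 'ok'; }       # exact match, checked first
location ^~ /static/ { root /var/www; }        # prefix match that skips regex checks
location ~ \.php$ { return 403; }              # case-sensitive regex
location ~* \.(jpg|png|gif)$ { expires 30d; }  # case-insensitive regex
location / { try_files $uri $uri/ =404; }      # default prefix catch-all
```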

95% confidence
A

Complete configuration for benchmark server: http { log_format benchmark '$time_local $request_method $request_uri $status "$http_user_agent"'; limit_req_zone $binary_remote_addr zone=benchmark:10m rate=10r/s; server { listen 8080; root /var/www/html; access_log /var/log/nginx/benchmark-access.log benchmark; error_log /var/log/nginx/benchmark-error.log; limit_req zone=benchmark burst=10 nodelay; limit_req_status 429; error_page 404 /404.html; location = /404.html { internal; } location / { try_files $uri $uri/ =404; } } }. Test with nginx -t before reload.

95% confidence
A

Create a server block with listen, root, and location directives. Example: server { listen 8080; root /var/www/html; location / { try_files $uri $uri/ =404; } }. The listen directive sets the port, root specifies the document root, and the location block handles request matching. The try_files directive checks if files exist before returning 404. Add server_name for virtual host matching. Place configuration in /etc/nginx/sites-available/ and symlink to sites-enabled/, or in /etc/nginx/conf.d/*.conf depending on your setup.

95% confidence
A

Nginx worker processes run as a non-privileged user (www-data on Debian/Ubuntu, nginx on CentOS/RHEL). Required permissions: directories need execute (x) permission to traverse, files need read (r) permission to serve. Minimum: 644 for files, 755 for directories. Set ownership: sudo chown -R www-data:www-data /var/www/html (adjust user for your OS). The user directive in nginx.conf specifies the worker process user: user www-data;. For log directories, Nginx needs write permission. Common issue: SELinux on CentOS may block access even with correct permissions - use restorecon or adjust SELinux context.

95% confidence
A

The index directive defines default file(s) to serve when a directory is requested. Syntax: index file1 [file2 ...];. Default value is index.html. Example: index index.html index.htm index.php;. Nginx checks files in order, serving the first one found. If none exist, it either shows directory listing (if autoindex on) or returns 403/404. Can be set in http, server, or location contexts. Using index causes an internal redirect, potentially matching a different location block. Common pattern: location / { index index.html; try_files $uri $uri/ =404; }.

95% confidence
A

The client_max_body_size directive sets the maximum allowed size of client request body. Default is 1m (1 megabyte). Requests exceeding this limit receive 413 (Request Entity Too Large) error. Example: client_max_body_size 50m; (allow 50MB uploads). Set to 0 to disable size checking. Can be placed in http, server, or location contexts. Important for file upload endpoints: location /upload { client_max_body_size 100m; }. Consider also adjusting client_body_timeout and client_body_buffer_size for large uploads.

95% confidence
A

Nginx processes requests in two phases: 1. Match listen address:port to find candidate server blocks. 2. Match Host header against server_name to select specific block. If no server_name matches, the default_server handles the request (first block if none specified as default). For multiple listen addresses, Nginx first filters by IP:port, then checks server_name. Example: with listen 80; on multiple blocks, the Host header determines which handles the request. Without Host header or no match, default server is used. The default server for each address:port pair can be explicitly set with default_server parameter.

95% confidence
A

The return directive stops processing and returns a specified code or performs a redirect. Syntax: return code [text]; or return code URL; or return URL;. Examples: return 404; (return 404 status), return 301 https://example.com$request_uri; (permanent redirect), return 200 'OK'; (return 200 with body), return 403 '{"error":"forbidden"}'; (JSON response with status). Use return for simple responses; for complex logic use rewrite or try_files. Return is more efficient than rewrite for simple redirects. In location blocks: location /old { return 301 /new; }.

95% confidence
A

The default_server parameter designates a server block as the default for an address:port pair when no server_name matches. Syntax: listen 80 default_server;. If not specified, the first server block in the configuration becomes the default. This handles requests when the Host header does not match any server_name, or when the header is missing. Only one server block can be default_server for each address:port combination. Available since Nginx version 0.8.21. Commonly used with catch-all server_name _; for unmatched requests.

95% confidence
A

The default_type directive sets the MIME type for responses when the type cannot be determined from the file extension. Default value: text/plain. Commonly changed to application/octet-stream for downloads. Example: default_type application/octet-stream;. Useful in API locations returning JSON: location /api { default_type application/json; return 200 '{"status":"ok"}'; }. Can be set in http, server, or location contexts. Nginx uses the types directive (or included mime.types file) to map extensions to MIME types; default_type is the fallback when no mapping exists.

95% confidence
A

The try_files directive checks for file existence in order and uses the first match. Syntax: try_files file1 [file2 ...] fallback;. Example: try_files $uri $uri/ /index.html;. This checks if the requested URI exists as a file, then as a directory, then serves index.html as fallback. Common patterns: try_files $uri $uri/ =404; (return 404 if not found), try_files $uri @backend; (proxy to named location if not found). The last parameter must be a fallback (URI or =code). Used with root directive for static file serving. Essential for SPA applications: try_files $uri /index.html;.
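The SPA pattern mentioned above as a complete server block; the root path is a placeholder:

```nginx
# Single-page application fallback: unknown paths serve the SPA entry point.
server {
    listen 80;
    root /var/www/spa;

    location / {
        # File, then directory, then index.html for client-side routing.
        try_files $uri $uri/ /index.html;
    }
}
```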

95% confidence

Logging

14 questions
A

Use the access_log directive to specify the log file path and optionally a log format. Syntax: access_log path [format [buffer=size] [gzip[=level]] [flush=time] [if=condition]];. Example: access_log /var/log/nginx/benchmark-access.log custom;. The directive can be placed in http, server, or location contexts. Lower-level settings override higher-level ones. Multiple access_log directives can log to different files. To disable logging: access_log off;. Ensure the directory exists and Nginx has write permissions (typically www-data or nginx user).

95% confidence
A

Use the log_format directive in the http block to define custom formats. Syntax: log_format format_name 'format_string';. Example: log_format custom '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';. Common variables: $time_local (timestamp), $request_method (GET/POST), $status (HTTP code), $http_user_agent (browser info), $remote_addr (client IP), $request_time (processing time). Apply the format using access_log: access_log /var/log/nginx/access.log custom;. The predefined combined format is the default.

95% confidence
A

The $status variable contains the HTTP response status code returned to the client (e.g., 200, 404, 500, 503). Used in log formats to track response outcomes. Example: log_format status_log '$remote_addr $status $request_uri';. Available since Nginx 1.3.2. Common codes: 200 (OK), 301/302 (redirects), 400 (bad request), 401 (unauthorized), 403 (forbidden), 404 (not found), 500 (server error), 502 (bad gateway), 503 (service unavailable), 504 (gateway timeout).

95% confidence
A

Use the error_log directive to specify the error log location and severity level. Syntax: error_log path [level];. Example: error_log /var/log/nginx/benchmark-error.log warn;. Severity levels from highest to lowest: emerg, alert, crit, error, warn, notice, info, debug. Setting a level logs that level and all more severe ones. Can be placed in main, http, server, or location contexts. Lower-level settings override inherited ones. Debug level requires Nginx built with --with-debug. For production, use warn or error to balance insight and disk usage.

95% confidence
A

Define a detailed log format in the http block capturing all relevant request data. Example: log_format detailed '$remote_addr - $remote_user [$time_local] "$request_method $request_uri $server_protocol" $status $body_bytes_sent "$http_referer" "$http_user_agent" rt=$request_time uct="$upstream_connect_time" uht="$upstream_header_time" urt="$upstream_response_time"';. Apply it: access_log /var/log/nginx/detailed.log detailed;. This captures: client IP, timestamp, request method, URI, status, bytes sent, user agent, and timing metrics. Quote strings with spaces (user agent, referer). Time variables are in seconds with millisecond precision.

95% confidence
A

The combined format is Nginx's default log format defined as: log_format combined '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';. It includes: client IP, remote user (usually -), timestamp, full request line, status code, bytes sent, referrer URL, and user agent. This format is compatible with common log analysis tools. Used by default when no format is specified in access_log. Can be extended: log_format extended '$remote_addr ... "$gzip_ratio"';.

95% confidence
A

The $http_user_agent variable captures the User-Agent header sent by the client, identifying the browser, bot, or client application. Example values: Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0. Must be quoted in log formats due to spaces: log_format custom '... "$http_user_agent"';. Any HTTP header can be logged using the form $http_headername (header name lowercased, with dashes converted to underscores). Can be used in conditionals to block bots or serve different content: if ($http_user_agent ~* (bot|crawler)) { return 403; }.

95% confidence
A

Use the if parameter in access_log to log conditionally based on variables. Example: map $status $loggable { ~^[23] 0; default 1; } access_log /var/log/nginx/error-only.log combined if=$loggable;. This logs only non-2xx/3xx responses. Another pattern: map $request_uri $log_uri { ~*\.(js|css|png|jpg)$ 0; default 1; } access_log /var/log/nginx/app.log combined if=$log_uri;. This excludes static assets from logging. Logging is skipped when the condition evaluates to an empty string or '0'. Available since Nginx 1.7.0. Useful for reducing log volume and focusing on relevant entries.
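The error-only pattern laid out as a config fragment; the log path is a placeholder and the map must sit in the http block:

```nginx
# Log only 4xx/5xx responses.
map $status $loggable {
    ~^[23]  0;   # skip 2xx and 3xx
    default 1;   # log everything else
}

access_log /var/log/nginx/errors-only.log combined if=$loggable;
```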

95% confidence
A

Key logging variables: $remote_addr (client IP), $remote_user (authenticated user), $time_local (timestamp in Common Log Format), $time_iso8601 (ISO 8601 timestamp), $request (full request line), $request_method (GET/POST), $request_uri (URI with query string), $status (HTTP status code), $body_bytes_sent (response body size), $http_referer (referrer URL), $http_user_agent (browser/client info), $request_time (request processing time), $upstream_response_time (upstream server response time), $connection (connection serial number), $msec (Unix timestamp with milliseconds). Full list at nginx.org/en/docs/varindex.html.

95% confidence
A

Use access_log off; within a location block to disable logging for that location. Example: location ~* \.(gif|jpg|png|css|js)$ { access_log off; }. This stops logging requests for static assets. You can also use conditional logging with the if parameter to selectively disable: map $request_uri $loggable { ~*health 0; default 1; } access_log /var/log/nginx/access.log combined if=$loggable;. Complete disabling at server level: access_log off;. Disabling access logs can improve performance but makes troubleshooting difficult - use selectively for high-volume, low-value requests like health checks.

95% confidence
A

The $time_local variable displays the local time when the request was processed in Common Log Format. Format example: [21/Jan/2026:12:30:45 +0000]. It includes day/month/year:hour:minute:second and timezone offset. Used in the default combined log format. For ISO 8601 format, use $time_iso8601 instead (e.g., 2026-01-21T12:30:45+00:00). The $msec variable provides Unix timestamp with millisecond precision. Available since Nginx 1.3.12.

95% confidence
A

Use tail -f to follow log files in real-time: tail -f /var/log/nginx/access.log (access log), tail -f /var/log/nginx/error.log (error log). For both simultaneously: tail -f /var/log/nginx/*.log. Filter with grep: tail -f /var/log/nginx/access.log | grep 404. Use less +F for scrollable following. For colored output: tail -f /var/log/nginx/access.log | ccze -A. Check journald if using systemd: journalctl -u nginx -f. Logs default to /var/log/nginx/ on most Linux distributions (Debian/Ubuntu, CentOS/RHEL), or to custom paths as configured in nginx.conf.

95% confidence
A

The $request_method variable contains the HTTP method of the request, typically GET, POST, PUT, DELETE, HEAD, OPTIONS, or PATCH. Use in log formats to track request types: log_format detailed '[$time_local] $request_method $request_uri $status';. Can also be used in conditionals: if ($request_method = POST) { ... }. The $request variable contains the full request line including method, URI, and protocol (e.g., GET /index.html HTTP/1.1). For just the method, use $request_method.

95% confidence
A

Nginx error_log supports these levels from highest to lowest severity: emerg (system is unusable), alert (action must be taken immediately), crit (critical conditions), error (error conditions, default level), warn (warning conditions), notice (normal but significant), info (informational messages), debug (debug messages, requires --with-debug build). Setting a level logs that level and all higher severity levels. Example: error_log /var/log/nginx/error.log warn; logs warn, error, crit, alert, and emerg. Use warn or error in production to minimize disk I/O while capturing actionable issues.

95% confidence

Rate Limiting

9 questions
A

Define multiple limit_req_zone directives for different limiting criteria and apply them together. Example: limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s; limit_req_zone $server_name zone=perserver:10m rate=10r/s;. Apply both in server block: limit_req zone=perip burst=5 nodelay; limit_req zone=perserver burst=10;. This limits both per-IP (1 req/s with burst of 5) and per-server (10 req/s with burst of 10). A request is rejected if it exceeds either limit. Useful for protecting both individual endpoints and overall server capacity.
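The combined per-IP and per-server limits as a config sketch; zone names and rates are illustrative:

```nginx
# Two zones applied together; a request must pass BOTH or it is rejected.
limit_req_zone $binary_remote_addr zone=perip:10m     rate=1r/s;
limit_req_zone $server_name        zone=perserver:10m rate=10r/s;

server {
    listen 80;
    limit_req zone=perip     burst=5 nodelay;  # per-client limit
    limit_req zone=perserver burst=10;         # overall server capacity
}
```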

95% confidence
A

Rate limiting uses two directives: limit_req_zone (defines the zone) and limit_req (applies it). Define in http block: limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;. This creates a 10MB shared memory zone named mylimit that tracks requests per IP at 10 requests/second. The $binary_remote_addr key uses binary IP representation (efficient memory usage). 10MB stores approximately 160,000 IP addresses. Rate can be r/s (requests per second) or r/m (requests per minute). The zone definition alone does not limit requests - you must apply it with limit_req.

95% confidence
A

When the shared memory zone for rate limiting (limit_req_zone) is full and Nginx needs to add a new entry, it removes the oldest entry using LRU (Least Recently Used) eviction. If freeing one entry doesn't provide enough space for the new record, Nginx returns status code 503 (Service Temporarily Unavailable) by default, or the code specified by limit_req_status. A 10MB zone stores approximately 160,000 IP addresses. Monitor zone usage and increase size if needed. Signs of exhausted zone: unexpected 503 errors, legitimate users being blocked.

95% confidence
A

Complete rate limiting configuration: In http block: limit_req_zone $binary_remote_addr zone=ratelimit:10m rate=10r/s;. In server or location block: limit_req zone=ratelimit burst=10 nodelay; limit_req_status 429;. This creates a 10MB shared memory zone named 'ratelimit' tracking requests per IP at 10 requests/second with burst allowance of 10 additional requests processed immediately (nodelay). Returns 429 when limit exceeded. Optional: add error_page 429 /429.html; for custom rate limit page. The zone stores ~160,000 IP addresses with 10MB allocation.
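The same configuration formatted as a block; zone name, rate, and the error-page path are examples:

```nginx
# Per-IP rate limit that returns 429 instead of the default 503.
limit_req_zone $binary_remote_addr zone=ratelimit:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        limit_req zone=ratelimit burst=10 nodelay;
        limit_req_status 429;
    }

    error_page 429 /429.html;            # optional custom rate-limit page
    location = /429.html {
        root /var/www/errors;
        internal;                        # not directly requestable
    }
}
```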

95% confidence
A

By default, Nginx returns status code 503 (Service Temporarily Unavailable) when a client exceeds the rate limit. This can be confusing as it suggests server issues rather than client-side rate limiting. Use limit_req_status to set a more appropriate code: limit_req_status 429; returns 429 Too Many Requests, which is the standard HTTP code for rate limiting. Example: location / { limit_req zone=mylimit; limit_req_status 429; }. You can also return 444 to close the connection without response for aggressive blocking.

95% confidence
A

The $binary_remote_addr variable contains the client IP address in binary format (4 bytes for IPv4, 16 bytes for IPv6). Used as the key in limit_req_zone because it's more memory-efficient than $remote_addr (string format). Example: limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;. A 10MB zone using $binary_remote_addr stores approximately 160,000 unique IPv4 addresses. This makes it the standard choice for per-IP rate limiting. Alternative keys: $server_name for per-virtual-host limits, $request_uri for per-URL limits.

95% confidence
A

Use limit_req in server or location blocks to apply rate limiting. Basic syntax: limit_req zone=zonename [burst=N] [nodelay|delay=N];. Example: limit_req zone=mylimit burst=10 nodelay;. The burst parameter allows temporary spikes above the rate limit by queueing excess requests. Without nodelay, excess requests are delayed to maintain the rate. With nodelay, burst requests are processed immediately but once burst is exhausted, excess requests are rejected. Example configuration: limit_req zone=one burst=5; allows 5 requests above the rate limit to be queued and processed with delay.

95% confidence
A

The burst parameter in limit_req allows temporary exceeding of the rate limit by creating a queue for excess requests. Example: limit_req zone=mylimit burst=10; with rate=10r/s allows up to 10 additional requests to be queued when the rate is exceeded. Queued requests are processed as slots become available at the defined rate. Without burst, requests exceeding the rate are immediately rejected. With burst but without nodelay, excess requests experience delay. With burst and nodelay (burst=10 nodelay), burst requests are processed immediately but new requests beyond the burst are rejected until queue space frees up.

95% confidence
A

The nodelay parameter in limit_req causes requests within the burst allowance to be processed immediately rather than spaced out, while requests beyond the burst are rejected at once. Without nodelay, Nginx queues excess requests and processes them with artificial delay to maintain the specified rate. Example: limit_req zone=mylimit burst=20 nodelay; processes up to 20 burst requests instantly but rejects additional requests until slots in the burst queue free up at the configured rate. This provides a better user experience for legitimate traffic spikes while still limiting abuse.

95% confidence

Configuration Management

8 questions
A

Essential systemctl commands for Nginx: Start: sudo systemctl start nginx. Stop: sudo systemctl stop nginx. Restart: sudo systemctl restart nginx (full restart, drops connections). Reload: sudo systemctl reload nginx (graceful, applies config changes). Status: sudo systemctl status nginx. Enable at boot: sudo systemctl enable nginx. Disable at boot: sudo systemctl disable nginx. Native nginx commands: nginx -s stop (fast shutdown), nginx -s quit (graceful shutdown), nginx -s reload (reload config), nginx -s reopen (reopen log files). For older systems: sudo service nginx start|stop|restart|reload|status.

95% confidence
A

The main configuration file is typically at /etc/nginx/nginx.conf on Linux systems. Additional configuration locations: /etc/nginx/conf.d/*.conf (modular configs), /etc/nginx/sites-available/ and /etc/nginx/sites-enabled/ (Debian/Ubuntu virtual hosts). Default document root varies by OS: Ubuntu/Debian uses /var/www/html, CentOS/RHEL uses /usr/share/nginx/html. Log files default to /var/log/nginx/access.log and /var/log/nginx/error.log. On macOS with Homebrew: /usr/local/etc/nginx/nginx.conf or /opt/homebrew/etc/nginx/nginx.conf. Check nginx -t output for the active config path.

95% confidence
A

Multiple methods to check Nginx status: 1. systemctl status nginx - shows service status, PID, memory usage, recent logs. 2. ps aux | grep nginx - lists nginx processes (master and workers). 3. nginx -t - tests configuration and confirms nginx binary works. 4. curl -I http://localhost - tests HTTP response. 5. netstat -tlnp | grep :80 or ss -tlnp | grep :80 - shows if nginx is listening on expected ports. 6. Check /var/run/nginx.pid for process ID file. The master process runs as root, workers run as www-data/nginx user.

95% confidence
A

This is a Debian/Ubuntu convention for organizing virtual host configurations. sites-available contains all available site configuration files. sites-enabled contains symbolic links to configs that are actually active. To enable a site: ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/. To disable: rm /etc/nginx/sites-enabled/mysite. This allows keeping configurations without activating them. The main nginx.conf includes files from sites-enabled via: include /etc/nginx/sites-enabled/*;. Some systems use /etc/nginx/conf.d/*.conf instead, where presence of the .conf file means enabled.

95% confidence
A

The include directive imports configuration from external files for modularity. Syntax: include /path/to/file; or include /path/to/dir/*.conf;. Common uses: include /etc/nginx/conf.d/*.conf; (all configs in directory), include /etc/nginx/sites-enabled/*; (enabled sites), include /etc/nginx/snippets/ssl.conf; (reusable snippets). Place include inside the appropriate context (http, server, location). Included files must have valid Nginx syntax. Benefits: modular configuration, reusable snippets, easier maintenance. Example: create /etc/nginx/snippets/gzip.conf with compression settings and include it in multiple server blocks.
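The snippet pattern as a sketch; the snippets path is a convention, not required by nginx:

```nginx
# /etc/nginx/snippets/gzip.conf — a reusable compression snippet:
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1000;

# Any server block can then pull it in:
server {
    listen 80;
    include /etc/nginx/snippets/gzip.conf;
}
```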

95% confidence
A

Remove the symbolic link from sites-enabled using unlink or rm. Commands: sudo unlink /etc/nginx/sites-enabled/default or sudo rm /etc/nginx/sites-enabled/default. This disables the default site without deleting the original file in sites-available. To completely remove: also delete /etc/nginx/sites-available/default. After removal, test configuration (nginx -t) and reload Nginx (systemctl reload nginx). On some systems, the default config may be in /etc/nginx/conf.d/default.conf - remove this file if it exists. Nginx does not have an a2dissite command like Apache; symlinks must be managed manually.

95% confidence
A

Use systemctl reload nginx to apply configuration changes without dropping connections. This sends SIGHUP signal, causing Nginx to: load new configuration, start new worker processes with new config, gracefully shut down old workers after completing current requests. Command: sudo systemctl reload nginx. Alternative using nginx directly: sudo nginx -s reload. Always run nginx -t first to validate syntax. Reload is preferred over restart for configuration changes. Use restart (sudo systemctl restart nginx) only for significant changes like changing ports or interfaces.

95% confidence
A

Use nginx -t to test configuration syntax without applying changes. Run: sudo nginx -t. Successful output: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok; nginx: configuration file /etc/nginx/nginx.conf test is successful. Failed tests show the specific error and line number. Always run before reload or restart to prevent downtime. Use nginx -c /path/to/nginx.conf -t to test a specific file. Add -q flag to suppress non-error messages. Note: nginx -t returns errors one at a time, so multiple errors require repeated testing after each fix.

95% confidence

Performance

5 questions
A

worker_processes defines how many worker processes Nginx spawns, typically set to the number of CPU cores: worker_processes auto; (auto-detect) or worker_processes 4; (explicit). worker_connections sets max simultaneous connections per worker, found in the events block: events { worker_connections 1024; }. Total concurrent connections = worker_processes * worker_connections. With 4 workers and 1024 connections each, Nginx handles 4096 simultaneous connections. Also set worker_rlimit_nofile to at least 2x worker_connections for file descriptor limits. Example: worker_rlimit_nofile 4096; events { worker_connections 2048; }.
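The tuning described above as a top-of-nginx.conf fragment; the numbers are sized for a small multi-core host as an example:

```nginx
# Worker and connection tuning.
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 4096;      # at least 2x worker_connections

events {
    worker_connections 2048;    # per-worker limit; total = workers x this
    multi_accept on;            # accept multiple new connections per event
}
```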

95% confidence
A

Use keepalive_timeout and keepalive_requests directives to manage persistent connections. keepalive_timeout sets how long to keep idle connections open (default 75s): keepalive_timeout 65;. The first parameter is client timeout, optional second is Keep-Alive header value: keepalive_timeout 65 60;. keepalive_requests limits requests per connection (default 1000): keepalive_requests 100;. Higher values reduce connection overhead but use more memory. For high-traffic sites: keepalive_timeout 65; keepalive_requests 10000;. Set to 0 to disable keepalive. Also configure upstream keepalive for proxied backends.

95% confidence
A

The events block configures connection processing parameters at the core level. Location: top-level, outside http block. Key directives: worker_connections (max connections per worker), multi_accept on|off (accept multiple connections at once), use epoll|kqueue|select (connection processing method). Example: events { worker_connections 2048; multi_accept on; use epoll; }. Linux should use epoll (efficient for many connections), BSD/macOS use kqueue. Default worker_connections is 512-768 depending on installation. The events block is required in nginx.conf but can be empty to use defaults.

95% confidence
A

These directives optimize static file delivery: sendfile on; enables kernel-level file transfer, bypassing user-space buffers. Dramatically improves throughput (Netflix reported 6Gbps to 30Gbps improvement). tcp_nopush on; sends HTTP headers in one packet when using sendfile, reducing small packet overhead. tcp_nodelay on; disables Nagle's algorithm, sending small packets immediately without waiting to batch them. Use all three together: Nginx ensures full packets before sending, then removes tcp_nopush for the last packet so tcp_nodelay sends it immediately. Typical config: sendfile on; tcp_nopush on; tcp_nodelay on;.

95% confidence
A

Enable gzip in the http block with gzip on; and specify types to compress. Basic configuration: gzip on; gzip_types text/plain text/css application/json application/javascript text/xml application/xml; gzip_min_length 1000;. Key directives: gzip_comp_level 6; (compression level 1-9, 4-6 is balanced), gzip_vary on; (adds Vary: Accept-Encoding header), gzip_proxied any; (compress proxied responses), gzip_disable "msie6"; (disable for old IE). Default only compresses text/html. Verify with: curl -H "Accept-Encoding: gzip" -I http://yoursite.com and look for Content-Encoding: gzip in response.
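The directives above collected into one http-block fragment; levels and types follow the answer:

```nginx
# Gzip compression for text assets.
gzip on;
gzip_comp_level 6;              # 4-6 balances CPU cost vs ratio
gzip_min_length 1000;           # skip tiny responses
gzip_vary on;                   # add Vary: Accept-Encoding for caches
gzip_proxied any;               # also compress proxied responses
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
```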

95% confidence

Error Pages

3 questions
A

Specify multiple error codes in a single error_page directive. Syntax: error_page code1 code2 ... /error.html;. Example: error_page 404 500 502 503 504 /error.html; location = /error.html { root /var/www/errors; internal; }. You can also redirect to different pages per code: error_page 404 /404.html; error_page 500 502 503 504 /50x.html;. For API responses, return JSON: error_page 404 = @notfound; location @notfound { default_type application/json; return 404 '{"error": "Not Found"}'; }.

95% confidence
A

The internal directive specifies that a location can only be used for internal requests, not accessed directly by clients. Essential for error pages to prevent direct access: location = /error.html { root /var/www/html; internal; }. Without internal, users could directly request /error.html. Internal locations are used with error_page, try_files, and rewrite directives. Attempting to access an internal location directly returns 404. Commonly used pattern: error_page 404 /404.html; location = /404.html { internal; root /var/www/errors; }.

95% confidence
A

Use the error_page directive to specify a custom page for 404 errors. Basic configuration: error_page 404 /custom_404.html; location = /custom_404.html { root /usr/share/nginx/html; internal; }. The internal directive ensures the error page cannot be accessed directly by clients. Place the error_page directive in server or location blocks. The error page file must exist at the specified location. For multiple error codes: error_page 404 500 502 503 /error.html;. You can also redirect to a named location: error_page 404 = @notfound;.
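A complete sketch of the custom 404 setup; the roots are example paths:

```nginx
# Custom 404 page that clients cannot request directly.
server {
    listen 80;
    root /var/www/html;

    error_page 404 /custom_404.html;

    location = /custom_404.html {
        root /usr/share/nginx/html;
        internal;    # only reachable via error_page internal redirect
    }
}
```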

95% confidence