Nginx Worker Processes and Connections: Tuning for Your Server

Nginx's default configuration is conservative. The defaults for worker processes and connections will serve a low-traffic site without complaint, but they leave performance on the table for anything larger. Understanding what these settings do and how they interact lets you tune Nginx to use the hardware you have paid for.

worker_processes: How Many Workers to Run

worker_processes auto;

Nginx forks a master process and one or more worker processes. The master process reads configuration, manages sockets, and controls workers. The workers handle all actual connections. Because workers are single-threaded, each worker can only do one thing at a time from a CPU perspective — but thanks to the event-driven model, each worker can handle thousands of concurrent connections.

The auto value sets worker_processes to the number of available CPU cores. This is almost always correct: one worker per core minimises context switching while fully utilising all cores. On a 4-core server, auto gives you 4 workers.

Verify what auto will resolve to:

nproc
# 4

Setting worker_processes higher than the CPU count does not help for CPU-bound work and adds context-switching overhead. The exception is when workers are I/O-bound with very slow upstream systems and worker_connections is being maxed out — but this is rare and there are better ways to handle it.

If your server runs other CPU-intensive services (PHP-FPM, MySQL), you might set workers to nproc - 1 to leave a core for other processes. On a dedicated Nginx proxy or CDN edge node, auto is right.
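One way to express that shared-host setup in the config (values here are illustrative, assuming a 4-core machine where one core is left free for other services):

```nginx
# Hypothetical 4-core host shared with PHP-FPM and MySQL:
# run 3 workers and pin them to the first three cores.
worker_processes     3;
worker_cpu_affinity  0001 0010 0100;
```

worker_cpu_affinity is optional; without it the scheduler places workers freely, which is usually fine.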

worker_connections: Connections Per Worker

events {
    worker_connections 1024;
}

This sets the maximum number of simultaneous connections a single worker can handle. The default of 1024 is appropriate for a small server but too low for anything handling significant traffic.

The total maximum connections Nginx can handle is:

max_connections = worker_processes × worker_connections

With 4 workers and 1024 connections each, the maximum is 4096 simultaneous connections. For a site handling 500 concurrent users, each potentially with 2–4 open connections (assets, keep-alive), this can be tight.
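The multiplication can be checked in the shell (the 4-worker figure assumes a 4-core machine, as above):

```shell
# Theoretical connection ceiling with the default worker_connections
WORKERS=4           # worker_processes auto on a 4-core server
CONNECTIONS=1024    # default worker_connections
echo $(( WORKERS * CONNECTIONS ))   # prints 4096
```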

A practical starting point for a mid-sized server:

events {
    worker_connections 4096;
}

This gives 4 × 4096 = 16,384 possible connections. Each connection uses approximately 1–2 KB of memory, so 16,384 connections is about 16–32 MB of memory just for connection state — negligible on modern hardware.

multi_accept and use epoll

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

multi_accept on tells each worker to accept all new connections at once rather than one at a time. Without it, when multiple connections arrive simultaneously, a worker picks up one, processes its events, then picks up the next. With multi_accept on, the worker accepts all pending connections in a single accept loop iteration. This reduces latency under connection bursts.

use epoll selects the epoll I/O multiplexing method, which is the most efficient on Linux. Nginx auto-detects and uses epoll on Linux by default, so setting it explicitly is mostly documentation — but it is worth being explicit in production configs. On FreeBSD, use use kqueue instead.
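The equivalent events block on FreeBSD would swap the method (a sketch, reusing the worker_connections value from above):

```nginx
# FreeBSD variant: kqueue instead of epoll
events {
    worker_connections 4096;
    multi_accept       on;
    use                kqueue;
}
```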

worker_rlimit_nofile: File Descriptor Limits

Each connection uses a file descriptor. The OS limits how many file descriptors a process can have open at once. The default system limit is typically 1024, which will immediately cap your connections well below worker_connections.

worker_rlimit_nofile 65536;

This sets the maximum number of open files (file descriptors) for each worker process. Set it to at least twice your worker_connections value, because a proxied connection uses two file descriptors (one for the client, one for the upstream).
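To see how close a process is to that limit, count its entries under /proc/<pid>/fd (Linux only). The helper below is a sketch; it is demonstrated on the current shell because worker PIDs vary by host:

```shell
# Count open file descriptors for a PID by listing /proc/<pid>/fd (Linux)
count_fds() { ls "/proc/$1/fd" | wc -l; }

# Demonstrated on the current shell; for Nginx, pass a worker PID instead:
#   count_fds "$(pgrep -f 'nginx: worker' | head -1)"
count_fds "$$"
```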

You also need to raise the OS-level limit. Check the current limits:

ulimit -n
# 1024  (often the default)

# Check actual limits of a running Nginx worker
NGINX_PID=$(cat /run/nginx.pid)
cat /proc/$NGINX_PID/limits | grep "open files"
# Max open files          65536

To make the OS limit persistent, edit /etc/security/limits.conf:

nginx   soft   nofile   65536
nginx   hard   nofile   65536

Or on systemd-managed systems, create a service override (systemctl edit nginx) containing:

[Service]
LimitNOFILE=65536

Verify after restarting Nginx. Note that a changed LimitNOFILE takes effect only on a full restart, not on nginx -s reload, because a reload re-forks workers from the existing master process, which keeps its old limits:

systemctl restart nginx
NGINX_PID=$(cat /run/nginx.pid)
WORKER_PID=$(pgrep -P $NGINX_PID | head -1)
cat /proc/$WORKER_PID/limits | grep "open files"

keepalive_timeout and keepalive_requests

http {
    keepalive_timeout  65;
    keepalive_requests 1000;
}

keepalive_timeout sets how long an idle keep-alive connection stays open. The compiled-in default is 75 seconds; the 65 shown above comes from the sample nginx.conf shipped with Nginx. For a busy API or CMS serving many assets, keeping connections alive reduces the overhead of repeated TCP handshakes.

However, idle keep-alive connections consume a file descriptor and a small amount of memory on both client and server. If you have thousands of simultaneous users with long-lived connections, this adds up. For a Drupal site with mainly page views and cached assets, 30–65 seconds is sensible.

keepalive_requests limits how many requests a single keep-alive connection can serve. The default of 1000 in Nginx 1.19.10+ is reasonable (earlier versions defaulted to 100). Once a connection serves this many requests it is closed gracefully and the client opens a new one. Closing connections periodically frees the memory allocations that accumulate per connection, rather than letting one long-lived connection hold them indefinitely.

Upstream Keepalives with PHP-FPM

The most impactful keepalive setting for a PHP/Drupal site is keepalives on the upstream connection to PHP-FPM. By default, Nginx opens a new connection to PHP-FPM for every PHP request, which wastes time on TCP or Unix socket setup.

upstream php_fpm {
    server unix:/run/php/php8.3-fpm.sock;
    keepalive 32;
}

server {
    # ...
    location ~ \.php$ {
        fastcgi_pass php_fpm;
        fastcgi_keep_conn on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

keepalive 32 tells each Nginx worker to cache up to 32 idle connections to PHP-FPM, reused across requests. The fastcgi_keep_conn on directive is required alongside it: by default Nginx asks the FastCGI server to close the connection after each request, and fastcgi_keep_conn on keeps it open so the pool can reuse it.

For a Unix socket (same-host PHP-FPM), the benefit is smaller than for a TCP connection to a remote PHP-FPM, but it still avoids the overhead of socket teardown and re-establishment on every request.
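For contrast, a TCP upstream to a remote PHP-FPM host (the address is hypothetical) uses the same two directives and is where the pool saves the most, since each new connection would otherwise pay a full TCP handshake:

```nginx
upstream php_fpm_remote {
    server 10.0.0.5:9000;   # hypothetical remote PHP-FPM host
    keepalive 32;           # idle connections cached per worker
}
```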

Practical Values for a 4-Core VPS with 4 GB RAM

worker_processes      auto;        # 4 workers on a 4-core server
worker_rlimit_nofile  65536;

events {
    worker_connections  4096;
    multi_accept        on;
    use                 epoll;
}

http {
    keepalive_timeout   30;
    keepalive_requests  500;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;

    upstream php_fpm {
        server unix:/run/php/php8.3-fpm.sock;
        keepalive 32;
    }

    server {
        # ... virtual host config
        location ~ \.php$ {
            fastcgi_pass         php_fpm;
            fastcgi_keep_conn    on;
            fastcgi_read_timeout 60;
            include              fastcgi_params;
            fastcgi_param        SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}

With these settings, the maximum is 4 × 4096 = 16,384 simultaneous connections. Each worker can hold up to 65,536 open file descriptors. The 30-second keepalive timeout balances connection reuse against resource usage. The upstream keepalive pool of 32 connections avoids FastCGI reconnection overhead.

Memory Estimate

Component                  Per Unit   Units     Total
Nginx worker process       ~5 MB      4         ~20 MB
Active connections (16k)   ~2 KB      16,384    ~32 MB
PHP-FPM processes (25)     ~50 MB     25        ~1,250 MB
MySQL/MariaDB              ~512 MB    1         ~512 MB
Redis                      ~128 MB    1         ~128 MB
OS overhead                ~256 MB    1         ~256 MB

Total: approximately 2.2 GB for a typical Drupal stack with 25 PHP-FPM workers, leaving ~1.8 GB headroom on a 4 GB VPS. Nginx itself is not the memory concern — PHP-FPM process count dominates. See the separate article on PHP-FPM pool tuning for that side of the equation.
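The total follows directly from the table:

```shell
# Sum of the per-component totals from the table, in MB
echo $(( 20 + 32 + 1250 + 512 + 128 + 256 ))   # prints 2198, roughly 2.2 GB
```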

Verifying Your Settings Are Active

# Check for config syntax errors
nginx -t

# Reload gracefully (zero downtime)
nginx -s reload

# Check active worker count
ps aux | grep 'nginx: worker'

# Check open file descriptor limit of a worker
WORKER_PID=$(pgrep -f 'nginx: worker' | head -1)
cat /proc/$WORKER_PID/limits | grep "open files"

# Check current connection count
ss -s | grep estab

The nginx -t command will catch configuration errors before you reload. Always run it before nginx -s reload in production.
