NginX
Pronounced "Engine X," NginX is a web server which can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. It was created in 2005 by Igor Sysoev, and has been known for its stability, rich feature set, simple configuration, and low resource consumption. NginX and has become the most popular web server according to Netcraft -- edging out Apache by a slim margin.
NginX was written with the explicit goal of outperforming the Apache web server. For serving static files, NginX uses much less memory than Apache and can handle roughly four times as many requests per second. This performance comes at the cost of some flexibility, however: for example, Apache can override system-wide access settings on a per-directory basis with .htaccess files, while NginX has no equivalent feature built in.
Prerequisites
None
Required Packages
yay -Syu --needed apache-tools ca-certificates certbot-nginx nginx
Configuration
Configuration is done through the /etc/nginx/nginx.conf file. By default, everything lives in this single file, but nginx can include other files, and taking advantage of that makes multi-site administration much easier.
Core Configuration
The following is a good general starting point for the main configuration. Edit /etc/nginx/nginx.conf:
/etc/nginx/nginx.conf
user http;
worker_processes 1;          # one(1) worker or equal the number of _real_ cpu cores. 4=4 core cpu
worker_priority 15;          # renice workers to reduce priority compared to system processes for machine
                             # health. Worst case nginx will get ~25% system resources at nice=15
worker_rlimit_nofile 1024;   # maximum number of open files
worker_cpu_affinity auto;

events {
    multi_accept on;
    accept_mutex on;             # serially accept() connections and pass to workers, efficient if workers gt 1
    accept_mutex_delay 500ms;    # worker process will accept mutex after this delay if not assigned. (default 500ms)
    worker_connections 1024;     # number of parallel or concurrent connections per worker_processes
}

http {
    charset utf-8;
    aio on;                      # asynchronous file I/O, fast with ZFS, make sure sendfile=off
    sendfile off;                # on for decent direct disk IO, off for VMs
    tcp_nopush off;              # turning on requires sendfile=on
    tcp_nodelay on;              # Nagle buffering algorithm, used for keepalive only
    server_tokens off;           # version number in error pages
    log_not_found off;
    types_hash_max_size 4096;

    # MIME
    include mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log warn;

    # SSL
    ssl_session_timeout 1440m;
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_tickets off;
    ssl_prefer_server_ciphers off;

    # Diffie-Hellman parameter for DHE ciphersuites
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Mozilla Intermediate configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

    # OCSP Stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
    resolver_timeout 2s;

    # Size Limits
    #client_body_buffer_size 8k;
    #client_header_buffer_size 1k;
    client_max_body_size 16M;
    #large_client_header_buffers 4 4k/8k;

    ## From StackOverflow: for "upstream sent too big header while reading response header from upstream"
    fastcgi_buffers 8 16k;
    fastcgi_buffer_size 32k;

    # Timeouts, do not keep connections open longer than necessary to reduce
    # resource usage and deny Slowloris type attacks.
    client_body_timeout 5s;      # maximum time between packets the client can pause when sending nginx any data
    client_header_timeout 5s;    # maximum time the client has to send the entire header to nginx
    keepalive_timeout 75s;       # timeout which a single keep-alive client connection will stay open
    send_timeout 15s;            # maximum time between packets nginx is allowed to pause when sending the client data

    ## General Options
    gzip off;                    # disable on the fly gzip compression due to higher latency, only use gzip_static
    #gzip_http_version 1.0;      # serve gzipped content to all clients including HTTP/1.0
    gzip_static on;              # precompress content (gzip -9) with an external script
    #gzip_vary on;               # send response header "Vary: Accept-Encoding"
    gzip_proxied any;            # allows compressed responses for any request even from proxies
    ignore_invalid_headers on;
    keepalive_requests 50;       # number of requests per connection, does not affect SPDY
    keepalive_disable none;      # allow all browsers to use keepalive connections
    max_ranges 1;                # allow a single range header for resumed downloads and to stop large range header DoS attacks
    msie_padding off;
    open_file_cache max=1000 inactive=2h;
    open_file_cache_errors on;
    open_file_cache_min_uses 1;
    open_file_cache_valid 1h;
    output_buffers 1 512;
    postpone_output 1440;        # postpone sends to match our machine's MSS
    read_ahead 512K;             # kernel read-ahead set to the output_buffers
    recursive_error_pages on;
    reset_timedout_connection on;    # reset timed out connections freeing ram
    server_name_in_redirect off;     # if off, nginx will use the requested Host header
    source_charset utf-8;        # same value as "charset"

    ## Request limits
    limit_req_zone $binary_remote_addr zone=gulag:16m rate=600r/m;

    ## Log Format
    log_format main '$remote_addr $host $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_cipher $request_time';

    # load configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
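Before moving on, a quick sanity check is worthwhile. Note that the config above points ssl_dhparam at /etc/letsencrypt/ssl-dhparams.pem, a file certbot normally drops into place; if it does not exist yet, one rough option (the path and key size below are just the values assumed above) is to generate it yourself and then test the configuration:
sudo mkdir -p /etc/letsencrypt
sudo openssl dhparam -out /etc/letsencrypt/ssl-dhparams.pem 2048   # can take a while
sudo nginx -t                                                      # validate the configuration without (re)starting nginx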
Make the sites-available and sites-enabled directories:
sudo mkdir /etc/nginx/sites-{available,enabled}
Standard Web Site
Create a Web Root
sudo mkdir /srv/http/<domain>
If you don't have anything to put into the site's directory yet, create a placeholder index.html file.
/srv/http/<domain>/index.html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Server Configuration Confirmation</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  </head>
  <body style="font-family:sans-serif;">
    <h1>Web server is properly configured!</h1>
  </body>
</html>
Configuration
Make a site available by creating /etc/nginx/sites-available/<domain>.conf:
Remove the leading #s from the HTTPS lines below after setting up Let's Encrypt.
/etc/nginx/sites-available/<domain>.conf
server {
    listen 80;
    listen [::]:80;

    server_name <domain>;

#    return 301 https://$host$request_uri;
#}
#
#server {
#
#    listen 443 ssl http2;
#    listen [::]:443 ssl http2;
#
#    server_name <domain>;
#
#    ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
#    ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
#    ssl_trusted_certificate /etc/letsencrypt/live/<domain>/chain.pem;
#
#    fastcgi_param HTTPS on;
#
#    add_header Strict-Transport-Security max-age=15768000;

    root /srv/http/$host;
    index index.html index.php;

    add_header Cache-Control "public";
    add_header X-Frame-Options "DENY";

    access_log /var/log/nginx/access.log main buffer=32k;
    error_log /var/log/nginx/error.log error;

    limit_req zone=gulag burst=200 nodelay;

    # ACME challenge
    location ^~ /.well-known {
        allow all;
        alias /var/lib/letsencrypt/$host/.well-known;
        default_type "text/plain";
        try_files $uri =404;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ /(data|conf|bin|inc)/ {
        deny all;
    }

    location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
        access_log off;
        expires max;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location ~ /\.ht {
        deny all;
    }
}
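The PHP location above hands requests to PHP-FPM over /run/php-fpm/php-fpm.sock, and the commented-out HTTPS block expects certificates under /etc/letsencrypt/live/<domain>/. A rough sketch of both steps, assuming the stock Arch php-fpm package (whose default socket should match the path above) and certbot's webroot plugin pointed at the ACME location defined above:
yay -S --needed php-fpm
sudo systemctl enable --now php-fpm
sudo mkdir -p /var/lib/letsencrypt/<domain>                                  # must match the alias in the ACME location
sudo certbot certonly --webroot -w /var/lib/letsencrypt/<domain> -d <domain>
Once the certificate exists, uncomment the HTTPS server block and reload nginx.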
Reverse Proxy
A reverse proxy is primarily used when a stand-alone application provides its own web interface and you want to route access to it through standard HTTP(S). In the example below, the backend application is assumed to be listening on localhost port 11334; substitute whatever port your application actually uses.
As above, remove the leading #s from the HTTPS lines after setting up Let's Encrypt.
/etc/nginx/sites-available/<domain>.conf
server {
    listen 80;
    listen [::]:80;

    server_name <domain>;

#    return 301 https://$host$request_uri;
#
#}
#
#server {
#
#    listen 443 ssl http2;
#    listen [::]:443 ssl http2;
#
#    server_name <domain>;
#
#    ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
#    ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
#    ssl_trusted_certificate /etc/letsencrypt/live/<domain>/chain.pem;
#
#    add_header Strict-Transport-Security max-age=15768000;

    root /srv/http/$host;
    index index.html index.php;

    add_header Cache-Control "public";
    add_header X-Frame-Options "DENY";

    access_log /var/log/nginx/access.log main buffer=32k;
    error_log /var/log/nginx/error.log error;

    limit_req zone=gulag burst=200 nodelay;

    # ACME challenge
    location ^~ /.well-known {
        allow all;
        alias /var/lib/letsencrypt/$host/.well-known;
        default_type "text/plain";
        try_files $uri =404;
    }

    location / {
        proxy_pass http://localhost:11334;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        auth_basic "Username and Password Required";
        auth_basic_user_file /etc/nginx/sites-enabled/htpasswd;
    }
}
Basic Authorization
Some proxied sites have inadequate authentication of their own, or none at all, and sometimes you may simply want an additional "bump in the road" in front of a site. Nginx can use an htpasswd file to control access.
sudo htpasswd -c /etc/nginx/sites-enabled/htpasswd <user>
You will then be prompted for a password and asked to confirm it. When finished, htpasswd will create the file:
/etc/nginx/sites-enabled/htpasswd
<username>:<password-hash>
Additional users can be added by running the same command again without the -c parameter. To use Basic Authentication with nginx, put these lines in the appropriate server or location stanzas:
auth_basic "Restricted By Username and Password"; auth_basic_user_file /etc/nginx/sites-enabled/htpasswd;
SSL Certificate Authorization
Server Configuration
Nginx can also restrict access to clients that present a certificate signed by your own certificate authority (CA). First, adjust the [ CA_default ] section of /etc/ssl/openssl.cnf:
/etc/ssl/openssl.cnf
[ CA_default ]
dir          = /srv/ssl               # Where everything is kept
certificate  = $dir/certs/ca.crt      # The CA certificate
private_key  = $dir/private/ca.key    # The private key
RANDFILE     = $dir/private/.rand     # The private random number file
default_md   = sha1                   # use public key default MD
sudo mkdir -p /srv/ssl/{certs,crl,private}
sudo mkdir /srv/ssl/certs/users
sudo touch /srv/ssl/index.txt
echo "01" | sudo tee /srv/ssl/crlnumber
Creating the Server Certificate
sudo openssl genrsa -des3 -out /srv/ssl/private/ca.key 4096
sudo openssl req -new -x509 -days 1095 -key /srv/ssl/private/ca.key -out /srv/ssl/certs/ca.crt
sudo openssl ca -name CA_default -gencrl -keyfile /srv/ssl/private/ca.key -cert /srv/ssl/certs/ca.crt -out /srv/ssl/private/ca.crl -crldays 1095
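A quick, optional sanity check on what was just created:
sudo openssl x509 -in /srv/ssl/certs/ca.crt -noout -subject -dates
sudo openssl crl -in /srv/ssl/private/ca.crl -noout -lastupdate -nextupdate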
Creating User Certificates
export USRCRTDR="/srv/ssl/certs/users"
export USERNAME="username"
sudo openssl genrsa -des3 -out ${USRCRTDR}/${USERNAME}.key 1024
sudo openssl req -new -key ${USRCRTDR}/${USERNAME}.key -out ${USRCRTDR}/${USERNAME}.csr
sudo openssl x509 -req -days 1095 -in ${USRCRTDR}/${USERNAME}.csr -CA /srv/ssl/certs/ca.crt -CAkey /srv/ssl/private/ca.key -CAserial /srv/ssl/serial -CAcreateserial -out ${USRCRTDR}/${USERNAME}.crt
sudo openssl pkcs12 -export -clcerts -in ${USRCRTDR}/${USERNAME}.crt -inkey ${USRCRTDR}/${USERNAME}.key -out ${USRCRTDR}/${USERNAME}.p12
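The resulting .p12 bundle is what gets imported into the user's browser or OS keychain. Before handing it out, you can check that the certificate verifies against the CA:
sudo openssl verify -CAfile /srv/ssl/certs/ca.crt ${USRCRTDR}/${USERNAME}.crt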
Revoking User Certificates
sudo openssl ca -name CA_default -revoke /srv/ssl/certs/users/USERNAME.crt -keyfile /srv/ssl/private/ca.key -cert /srv/ssl/certs/ca.crt
sudo openssl ca -name CA_default -gencrl -keyfile /srv/ssl/private/ca.key -cert /srv/ssl/certs/ca.crt -out /srv/ssl/private/ca.crl -crldays 1095
Nginx Configuration
If you want to use certificate authorization in nginx, add these lines to the appropriate server stanzas:
ssl_client_certificate /srv/ssl/certs/ca.crt;
ssl_crl /srv/ssl/private/ca.crl;
ssl_verify_client on;
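After reloading nginx, requests without a client certificate should be rejected while requests presenting a signed one should pass. A rough check with curl, using the key pair generated earlier and <domain> as a placeholder:
# without a client certificate, nginx should answer with a 400 error
curl -I https://<domain>/
# with the user key pair generated earlier (curl will prompt for the key passphrase)
curl -I https://<domain>/ --cert ${USRCRTDR}/${USERNAME}.crt --key ${USRCRTDR}/${USERNAME}.key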
Finalization
Enable your sites by creating symlinks in sites-enabled that point to the conf files in sites-available:
sudo ln -s ../sites-available/<domain>.conf /etc/nginx/sites-enabled/<domain>.conf
Then make sure the nginx service is enabled and running:
sudo systemctl enable --now nginx
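If the placeholder page was created earlier, a final check (again with <domain> as a placeholder) confirms everything is wired together; the same test-and-reload pattern applies to any future configuration change:
sudo nginx -t && sudo systemctl reload nginx
curl -I http://<domain>/    # the index page should come back with a 200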