NGINX as a reverse proxy and its advantages

Before going into the details about the reverse proxy, let’s understand the proxy first.

What is a proxy?

It's a server that sits in front of one or more client machines. Its main purpose is to protect the clients' identity for outgoing requests to the internet and to provide controlled access to content. Basically, it acts as a middleman for those clients.

What is a reverse proxy?

A reverse proxy is the same idea applied in the opposite direction: it sits in front of the server(s) and intercepts incoming requests on their behalf.
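As a minimal sketch of the idea (the listen port and backend address here are hypothetical), a reverse proxy in NGINX terms is just a server block that forwards requests to a backend the client never talks to directly:

```nginx
# Minimal reverse-proxy sketch: NGINX accepts the request publicly
# and forwards it to a backend hidden from the client.
server {
    listen 80;

    location / {
        # Hypothetical backend address, for illustration only.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```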

Let's go ahead and configure NGINX as a reverse proxy for the server. In our case we are configuring it for a Node.js server running on port 3000.
This is what our NGINX configuration would look like.

The default nginx.conf  file.

user www-data;
worker_processes auto;
pid /run/;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 1024;
	# multi_accept on;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    sendfile                    on;
    ignore_invalid_headers      on;
    keepalive_timeout           65;
    gzip                        on;
    gzip_static                 on;
    gzip_types                  text/plain application/json;
    gzip_min_length             500;
    gzip_http_version           1.1;
    gzip_proxied                expired no-cache no-store private auth;

    proxy_intercept_errors          on;
    server_names_hash_bucket_size   128;
    client_max_body_size            32m;

    log_format main '"[$time_iso8601]": remote_addr="$remote_addr" - '
                    'upstream_addr="$upstream_addr" - '
                    'upstream_response_time="$upstream_response_time" - '
                    'status="$status" - body_bytes_sent="$body_bytes_sent" - '
                    'request="$request" - http_referer="$http_referer" - '
                    'http_user_agent="$http_user_agent" - server_name="$server_name" - '
                    'http_x_forwarded_for="$http_x_forwarded_for" - '
                    'upstream_status="$upstream_status" - '
                    'proxy_add_x_forwarded_for="$proxy_add_x_forwarded_for" - '
                    'http_via="$http_via" - request_time="$request_time" - '
                    'connection="$connection" - connection_requests="$connection_requests" - '
                    'host="$host"';
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Our custom configuration file, let's call it app.conf.

set_real_ip_from    ;
real_ip_header x-forwarded-for;
real_ip_recursive on;

limit_req_zone $remote_addr zone=web_reqs_1:30m rate=5r/s;

server {
    listen                  8080;
    server_name   ;
    root                    /usr/share/nginx/html;
    client_max_body_size    32m;
    gzip_vary               on;
    charset                 utf-8;
    limit_req zone=web_reqs_1 burst=10 delay=5;
    limit_req_status 429;
    limit_conn_status 429;

    location /_next {
        proxy_pass    ;
        proxy_http_version      1.1;

        proxy_hide_header  Cache-Control;
        add_header Cache-Control 'public, max-age=31536000, immutable';

        proxy_set_header        Upgrade $http_upgrade;
        proxy_set_header        Connection 'upgrade';
        proxy_set_header        Host $host;
    }

    location / {
        auth_basic              "Basic Auth";
        auth_basic_user_file    /etc/nginx/conf.d/.htpasswd;

        proxy_pass    ;
        proxy_http_version      1.1;
        proxy_set_header        Upgrade $http_upgrade;
        proxy_set_header        Connection 'upgrade';
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-Proto https;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass      $http_upgrade;
        add_header              X-Frame-Options   SAMEORIGIN;
        add_header              X-Content-Type-Options nosniff;
        add_header              Access-Control-Allow-Origin;
        add_header              Cross-Origin-Opener-Policy same-origin;
        add_header              Cross-Origin-Resource-Policy same-origin;
        add_header              Referrer-Policy strict-origin-when-cross-origin;
        add_header              X-Dns-Prefetch-Control on;
        add_header              X-Xss-Protection '1; mode=block';
        add_header              Strict-Transport-Security 'max-age=15552000; includeSubDomains';
        server_tokens off;
        more_clear_headers  'ETag' 'Server' 'X-Powered-By' 'X-Runtime' 'X-Nextjs-Cache';
    }

    error_page  404     /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }

    error_page  500 502 503     /500.html;
    location = /500.html {
        root /usr/share/nginx/html;
    }

    error_page  429     /429.html;
    location = /429.html {
        root /usr/share/nginx/html;
    }
}
Make sure to install nginx-extras (which provides the more_clear_headers directive from the headers-more module) using your distribution's package manager, such as apt-get or yum.

In the default nginx.conf, we added the following lines:

  • include /etc/nginx/modules-enabled/*.conf; - which loads all the enabled modules.
  • Added a custom log_format.

In app.conf configuration file

  • The first 3 lines, set_real_ip_from ...; real_ip_header x-forwarded-for; real_ip_recursive on; - these set the client's real IP from the X-Forwarded-For header. (This is required if NGINX is placed behind an AWS ALB; use the appropriate VPC CIDR as the set_real_ip_from value.)
  • limit_req_zone $remote_addr zone=web_reqs_1:30m rate=5r/s; - applies a rate limit, either globally or to a certain URL path. It takes effect only where you define limit_req zone=web_reqs_1 burst=10 delay=5; inside a server or location block. In this case, it allows only 5 requests per second per client IP.
  • add_header in a location block - adds a custom header to the outgoing response.
  • server_tokens off; - hides the NGINX build name and version in the response headers.
  • more_clear_headers 'ETag' 'Server' 'X-Powered-By' 'X-Runtime' 'X-Nextjs-Cache'; - removes the listed headers from the response.
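The zone above can also be scoped to a single URL path instead of the whole site. A sketch, assuming a hypothetical /api path (the backend address is likewise illustrative):

```nginx
# Rate-limit only /api; other locations are unaffected.
location /api {
    limit_req           zone=web_reqs_1 burst=10 delay=5;
    limit_req_status    429;

    # Hypothetical backend, for illustration only.
    proxy_pass          http://127.0.0.1:3000;
}
```

Here burst=10 queues up to 10 excess requests, delay=5 forwards the first 5 of those without throttling, and anything beyond the burst is rejected with a 429.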

What are the benefits of reverse proxy?

  • It can act as a load balancer. In this scenario, its main purpose is to direct the traffic to the appropriate app server by distributing the load evenly among them.
  • It can be used for caching static assets like JS/CSS or images.
  • It can protect server identities by hiding certain response headers.
  • It can be used to protect app servers from external attacks like DDoS, either by rate limiting or completely blocking certain IP addresses.
  • It can be used to set custom HTTP headers for the outgoing response.
  • It can be used to make a site, or part of it, private by enabling basic authentication.
  • It can also be used to get the location information (like City, State, Country etc) of the client by their IP address.
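The load-balancing and IP-blocking points above can be sketched in configuration. All names and addresses below are hypothetical; the upstream block spreads traffic across several app servers, and deny blocks a specific client outright:

```nginx
# Hypothetical pool of Node.js app servers; NGINX balances across them.
upstream node_app {
    least_conn;                    # send each request to the least busy server
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
    server 10.0.1.12:3000 backup;  # used only if the others are unavailable
}

server {
    listen 8080;

    # Block a single abusive client IP outright (hypothetical address).
    deny 203.0.113.7;

    location / {
        proxy_pass          http://node_app;
        proxy_http_version  1.1;
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

least_conn is one of several balancing methods; omitting it gives round-robin.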

Overall, it is good to have NGINX in front of the app server to intercept requests, even if you already have a proper load-balancing solution like AWS ALB.

Balakrishna Hebbar

An experienced professional having gained extensive knowledge in the diverse domain of Software Development. | #design #development #devops