Set Up Nginx Reverse Proxy SSL Termination on Ubuntu VPS

By Raman Kumar

Updated on May 16, 2026

Understanding Nginx Reverse Proxy SSL Termination

SSL termination with Nginx reverse proxy offloads encryption processing from your backend servers. It maintains secure connections from clients while improving performance.

Your VPS handles the SSL handshake once at the edge. Backend servers receive decrypted traffic internally. This reduces CPU overhead on application servers and centralizes certificate management.

This configuration works well for hosting multiple applications behind a single public IP. Hostperl VPS hosting provides the dedicated resources needed to handle SSL termination efficiently.

Prerequisites and Environment Setup

Before configuring Nginx reverse proxy SSL termination, ensure your Ubuntu VPS meets these requirements:

  • Ubuntu 20.04 LTS or newer with root access
  • Domain name pointed to your VPS IP address
  • Backend application running on internal port (e.g., 3000, 8080)
  • At least 2GB RAM for SSL processing and proxy operations
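
If no backend is running yet, a throwaway stand-in makes the later proxy steps testable. The sketch below assumes python3 is available and uses its built-in web server on the internal port:

```shell
# Create a stand-in backend for testing the proxy (any real app works the same way)
mkdir -p /tmp/demo-backend
echo "backend up" > /tmp/demo-backend/index.html

# Serve it on the internal port in the background
(cd /tmp/demo-backend && python3 -m http.server 3000 --bind 127.0.0.1 &)
```

Once Nginx is configured, requests to your domain should return this placeholder page; swap in your real application afterwards.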

Update your system packages first:

sudo apt update && sudo apt upgrade -y
sudo apt install nginx certbot python3-certbot-nginx -y

Verify Nginx installation and start the service:

sudo systemctl start nginx
sudo systemctl enable nginx
sudo nginx -v

Configure Basic Reverse Proxy Setup

Create a new Nginx configuration file for your domain. Replace example.com with your actual domain name:

sudo nano /etc/nginx/sites-available/example.com

Add the basic reverse proxy configuration:

server {
    listen 80;
    server_name example.com www.example.com;
    
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }
}

Enable the site configuration:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Test the basic proxy by accessing your domain. You should see your backend application served through Nginx.

Install SSL Certificate with Let's Encrypt

Obtain an SSL certificate using Certbot. This automatically configures Nginx for HTTPS:

sudo certbot --nginx -d example.com -d www.example.com

Follow the interactive prompts and choose the option that redirects all HTTP traffic to HTTPS (the exact prompt wording and numbering vary between Certbot versions).

Certbot automatically modifies your Nginx configuration to include SSL settings.
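
The exact edits vary by Certbot version, but the effect is equivalent to a plain HTTP server block that redirects everything to HTTPS:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    # Send all plain-HTTP requests to the HTTPS server block
    return 301 https://$host$request_uri;
}
```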

Verify the certificate installation:

sudo certbot certificates

Check the updated configuration:

sudo cat /etc/nginx/sites-available/example.com

The automatic configuration handles basic SSL. You'll want to enhance it with production-ready settings.

Optimize SSL Configuration

Edit your site configuration to add advanced SSL settings:

sudo nano /etc/nginx/sites-available/example.com

Replace the SSL server block with this optimized configuration:

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    # SSL Security Settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    
    # Security Headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header Referrer-Policy "no-referrer-when-downgrade";
    
    # Proxy Configuration
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        
        # Proxy timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        
        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

This configuration enables HTTP/2 and optimizes SSL settings. It includes essential security headers.

The proxy settings ensure your backend application receives correct client information.

Handle Multiple Backend Services

For hosting multiple applications, configure location-based routing:

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL settings (same as above)
    
    # Main application
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
    
    # API service
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
    
    # Static files service
    location /static/ {
        proxy_pass http://127.0.0.1:9000/;
        proxy_set_header Host $host;
        proxy_cache_valid 200 1d;  # takes effect only if a proxy_cache zone is defined
        expires 1d;
        add_header Cache-Control "public, immutable";
    }
}

Each location block can point to different backend services. SSL handling remains centralized.

This approach works well when migrating from shared hosting to VPS with multiple applications.

Configure Upstream Load Balancing

For high-availability setups, configure upstream servers with load balancing behind your SSL layer:

upstream backend_app {
    least_conn;
    server 127.0.0.1:3001 weight=3 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:3002 weight=3 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:3003 weight=2 backup;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;
    
    # SSL configuration (same as above)
    
    location / {
        proxy_pass http://backend_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        
        # Health check settings
        proxy_next_upstream error timeout http_500 http_502 http_503;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 10s;
    }
}

This configuration distributes traffic across multiple backend instances. The least_conn method routes each new request to the server with the fewest active connections.
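
One refinement worth considering here, assuming your backend speaks HTTP/1.1, is a keepalive pool so Nginx reuses TCP connections to the upstreams instead of opening one per request:

```nginx
upstream backend_app {
    least_conn;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    keepalive 32;  # idle connections kept open to the backends
}

server {
    # ...SSL and server_name settings as above...

    location / {
        proxy_pass http://backend_app;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # clear Connection: close from clients
    }
}
```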

Monitor SSL Performance and Troubleshooting

Monitor your SSL performance with these diagnostic commands:

# Check SSL certificate status
openssl s_client -connect example.com:443 -servername example.com

# Test SSL configuration
sudo nginx -t

# Monitor SSL handshake performance
time openssl s_client -connect example.com:443 < /dev/null

# Check proxy performance
curl -I -H "Host: example.com" https://example.com/

Common troubleshooting steps include:

  • Verify backend services are running on specified ports
  • Check firewall rules allow internal proxy connections
  • Monitor SSL certificate expiration dates
  • Review Nginx error logs for proxy failures
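
The certificate expiry check in particular is easy to script. A sketch, using a throwaway self-signed certificate in place of the real /etc/letsencrypt/live/example.com/fullchain.pem:

```shell
# Generate a short-lived self-signed cert to demonstrate the check;
# point CERT at your real fullchain.pem in production
CERT=/tmp/demo-cert.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo-key.pem -out "$CERT" -subj "/CN=example.com" 2>/dev/null

# Compute days until the certificate expires (GNU date, as on Ubuntu)
end=$(openssl x509 -enddate -noout -in "$CERT" | cut -d= -f2)
days=$(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
echo "Certificate expires in $days days"
```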

Enable detailed logging for troubleshooting:

error_log /var/log/nginx/example.com_error.log debug;
access_log /var/log/nginx/example.com_access.log combined;
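
With access logging in place, a quick awk pass over the log surfaces proxy failures. The sample entries below stand in for your real example.com_access.log:

```shell
# Sample combined-format entries standing in for the real access log
cat > /tmp/sample_access.log <<'EOF'
127.0.0.1 - - [16/May/2026:10:00:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.81.0"
127.0.0.1 - - [16/May/2026:10:00:02 +0000] "GET /api/ HTTP/1.1" 502 157 "-" "curl/7.81.0"
127.0.0.1 - - [16/May/2026:10:00:03 +0000] "GET /api/ HTTP/1.1" 502 157 "-" "curl/7.81.0"
EOF

# Count responses per status code; a spike in 502s points at a dead backend
awk '{counts[$9]++} END {for (s in counts) print s, counts[s]}' /tmp/sample_access.log
```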

Performance monitoring helps identify bottlenecks in your setup. You can reference our Nginx SSL security headers guide for additional security enhancements.

Ready to deploy high-performance SSL configuration for your applications? Hostperl VPS hosting provides the dedicated resources and flexibility needed for efficient reverse proxy configurations. Our New Zealand-based infrastructure ensures low latency for APAC users while maintaining enterprise-grade performance.

Frequently Asked Questions

What's the performance impact of SSL on VPS resources?

SSL processing typically adds 10-15% CPU overhead, concentrated in the handshake, plus roughly 50-100 KB of extra RAM per concurrent connection. A 2GB VPS can comfortably handle several hundred simultaneous SSL connections while serving backend applications.

Can I use SSL with WebSocket connections?

Yes, add specific WebSocket handling to your proxy configuration with proxy_http_version 1.1, proxy_set_header Upgrade $http_upgrade, and proxy_set_header Connection "upgrade" directives.
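
A sketch of such a location block, assuming a hypothetical /ws/ endpoint served by the same backend:

```nginx
# Hypothetical WebSocket endpoint; adjust path and port to your application
location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;  # keep long-lived sockets from timing out
}
```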

How do I handle SSL certificate renewal with reverse proxy?

Certbot automatically renews Let's Encrypt certificates. Test renewal with sudo certbot renew --dry-run. The renewal process doesn't require Nginx configuration changes since certificate paths remain the same.

What happens if my backend server goes down?

Nginx returns a 502 Bad Gateway error by default. Configure upstream blocks with backup servers or custom error pages using proxy_next_upstream directives to handle backend failures gracefully.
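
A minimal sketch of the custom error page approach, assuming a 50x.html page exists under /usr/share/nginx/html (create one if your package doesn't ship it):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_intercept_errors on;         # let Nginx handle backend error codes
    error_page 502 503 504 /50x.html;  # serve a friendly page instead
}

location = /50x.html {
    root /usr/share/nginx/html;
}
```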

Should I use SSL for database connections?

Not at the proxy layer. Database connections should be encrypted end to end between the application and the database server rather than terminated at Nginx. Reserve SSL termination for HTTP/HTTPS traffic to web applications and APIs.