In this tutorial, we'll implement basic load balancing with Nginx on an AlmaLinux 9 server.
Introduction
Implementing Load Balancing with Nginx on AlmaLinux is a powerful way to distribute client requests evenly across multiple backend servers, improve our application’s availability, and scale seamlessly as traffic grows. In this tutorial, we’ll walk through every step—from preparing our AlmaLinux nodes to configuring advanced Nginx load-balancing directives—so that we can build a robust, production-ready environment. Let’s dive in.
Prerequisites
Before starting, make sure our new AlmaLinux server is ready. The following components should be installed and configured:
- A dedicated server or KVM VPS running AlmaLinux 9.
- The root user or a regular user with administrative (sudo) privileges.
- Two or more backend application servers.
- A domain name with an A record pointing to the server.
Basic Load Balancing with Nginx on AlmaLinux
1. Preparing Our AlmaLinux Environment
Before we install Nginx, we need to ensure our AlmaLinux servers are up to date and ready:
Update system packages
sudo dnf upgrade -y
This ensures we’re running the latest kernels, security patches, and utility libraries.
Install EPEL repository (optional)
Some Nginx modules or monitoring tools live in EPEL:
sudo dnf install epel-release -y
EPEL can also provide additional useful packages for observability or health checks.
Open necessary firewall ports
We’ll need ports 80 (HTTP) and 443 (HTTPS) open on our load-balancer node:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
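To confirm the rules took effect, list the services allowed in the active zone:
sudo firewall-cmd --list-services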
Configure SELinux contexts (if enforcing)
By default, Nginx should work under enforcing mode, but if you serve custom content from non-standard directories, set the proper context:
sudo semanage fcontext -a -t httpd_sys_content_t '/srv/www/html(/.*)?'
sudo restorecon -Rv /srv/www/html
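More relevant for a load balancer: under enforcing mode, SELinux blocks Nginx from opening outbound connections to our backend servers, which typically surfaces as 502 errors with "Permission denied" entries in /var/log/nginx/error.log. The standard fix is to enable the corresponding SELinux boolean:
sudo setsebool -P httpd_can_network_connect 1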
2. Installing Nginx from the Official Repository
While AlmaLinux ships Nginx in its base repos, it may be an older version. To leverage the latest features and security fixes, we’ll use the official Nginx repo:
Create the repo file
cat <<'EOF' | sudo tee /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=https://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
Note that quoting the heredoc delimiter ('EOF') stops the shell from expanding $releasever and $basearch, which must reach dnf literally.
Install Nginx
sudo dnf install nginx -y
Enable and start the service
sudo systemctl enable nginx
sudo systemctl start nginx
sudo systemctl status nginx
We should see Nginx reported as active (running). If not, inspect the output of sudo journalctl -u nginx for errors.
3. Designing Our Backend Pool
Load balancing revolves around an upstream block that defines our backend servers:
Identify backend nodes
Suppose we have three application servers at:
10.0.0.11:8080
10.0.0.12:8080
10.0.0.13:8080
Choose a load-balancing method
- Round-robin (default): evenly cycles through backends.
- Least connections: sends to the server with the fewest active sessions.
- IP hash: ensures a client IP always hits the same backend (sticky).
We’ll demonstrate each in the next section.
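Round-robin also accepts per-server weights when backends differ in capacity. As a quick illustration (assuming, hypothetically, that 10.0.0.11 can handle roughly twice the load of the others):
upstream app_servers {
    server 10.0.0.11:8080 weight=2;  # receives about twice as many requests
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}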
4. Configuring Nginx for Load Balancing
Edit the main configuration file or, better yet, create /etc/nginx/conf.d/loadbalancer.conf:
upstream app_servers {
    # Default: round-robin
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;

    # To use least connections, add this directive to the block:
    # least_conn;

    # To use IP hash (sticky sessions), add this instead:
    # ip_hash;
}

server {
    listen 80;
    server_name loadbalancer.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # WebSocket support if needed:
        # proxy_set_header Upgrade $http_upgrade;
        # proxy_set_header Connection "upgrade";
    }

    # Optional health-check endpoint for status
    location /health {
        proxy_pass http://app_servers/health;
    }
}
Key explanations:
- upstream app_servers defines our pool.
- proxy_pass transparently forwards requests.
- proxy_set_header lines preserve the original client IP and hostname.
- Uncomment least_conn; or ip_hash; to switch algorithms.
5. Testing and Validation
Syntax check and reload
sudo nginx -t && sudo systemctl reload nginx
Simulate traffic
Use a simple loop or tools like ab or wrk:
ab -n 1000 -c 50 http://loadbalancer.example.com/
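If ab isn't available, a plain shell loop with curl works for a quick smoke test:
for i in $(seq 1 100); do
    curl -s -o /dev/null -w "%{http_code}\n" http://loadbalancer.example.com/
done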
We’ll watch our backend logs to confirm requests are spread as expected.
Observe connection distribution. On each app node, tail the access logs:
sudo tail -f /var/log/nginx/access.log
We should see roughly equal hits (for round-robin) or as specified by our chosen algorithm.
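Alternatively, we can have the load balancer itself tag each response with the backend that served it. The X-Upstream header name below is just an illustrative choice; $upstream_addr is a built-in Nginx variable:
# Add inside the existing location / block:
add_header X-Upstream $upstream_addr always;
Then curl -sI http://loadbalancer.example.com/ | grep -i x-upstream shows the backend address rotating across requests.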
6. Securing the Load Balancer
Install and configure Certbot
Enable the EPEL and Certbot repos
sudo dnf install epel-release -y
sudo dnf install certbot python3-certbot-nginx -y
Obtain a certificate
Assuming our server’s DNS name is loadbalancer.example.com, run:
sudo certbot certonly --nginx \
--agree-tos \
--email admin@example.com \
-d loadbalancer.example.com
- --nginx lets Certbot verify via a temporary Nginx configuration.
- Certificates are stored under /etc/letsencrypt/live/loadbalancer.example.com/.
Auto-renewal test
sudo certbot renew --dry-run
This ensures the scheduled renewal will work when certificates near expiry.
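With the EPEL packaging of Certbot, renewals are normally driven by a systemd timer rather than a cron job; assuming that packaging, we can enable and inspect it like this:
sudo systemctl enable --now certbot-renew.timer
systemctl list-timers certbot-renew.timer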
Update Nginx for HTTPS
Edit our /etc/nginx/conf.d/loadbalancer.conf (or equivalent) to add an SSL-enabled server block alongside the existing one.
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

# Redirect all HTTP to HTTPS
server {
    listen 80;
    server_name loadbalancer.example.com;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl;
    http2 on;  # on recent Nginx; older versions use "listen 443 ssl http2;" instead
    server_name loadbalancer.example.com;

    # SSL certificate paths from Certbot
    ssl_certificate     /etc/letsencrypt/live/loadbalancer.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/loadbalancer.example.com/privkey.pem;

    # Recommended SSL settings
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Proxy settings remain as before
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # (Optional) Health check
    location /health {
        proxy_pass http://app_servers/health;
    }
}
What we’ve done:
- Redirected HTTP → HTTPS: any request on port 80 goes to the secure equivalent.
- Enabled SSL on port 443: ssl_certificate and ssl_certificate_key point to the Let’s Encrypt files.
- Hardened protocols and ciphers: enforce TLS 1.2+ and strong cipher suites.
After reloading Nginx, visit https://loadbalancer.example.com to confirm the site loads over a valid certificate.
7. Implementing Health Checks
Nginx Plus offers active health checks; with the open-source edition, we rely on passive checks and upstream parameters like max_fails and fail_timeout:
upstream app_servers {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:8080 max_fails=3 fail_timeout=30s;
}
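With max_fails=3 fail_timeout=30s, a backend that fails three times within 30 seconds is taken out of rotation for 30 seconds, then retried. The upstream block also accepts down and backup parameters, which pair well with passive checks; the 10.0.0.14 node below is a hypothetical standby:
upstream app_servers {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:8080 down;    # temporarily removed from rotation (e.g. maintenance)
    server 10.0.0.14:8080 backup;  # receives traffic only when the primaries are unavailable
}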
Autoscaling considerations
- In cloud environments, we can dynamically regenerate the upstream block via tools like Consul Template or an API gateway to add/remove backends in real time.
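As a sketch of that approach (assuming a Consul cluster with backends registered under a service named "app"), a Consul Template file could render the upstream block like this:
# upstream.conf.ctmpl: rendered by consul-template whenever the
# membership of the "app" service changes in Consul
upstream app_servers {
{{ range service "app" }}
    server {{ .Address }}:{{ .Port }} max_fails=3 fail_timeout=30s;
{{ end }}
}
Running consul-template -template "upstream.conf.ctmpl:/etc/nginx/conf.d/upstream.conf:systemctl reload nginx" keeps the file current and reloads Nginx on every change.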
8. Monitoring and Maintenance
Expose status
Enable the stub status module to monitor active connections:
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}
Then query it with curl http://localhost/nginx_status.
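The output looks like this (the numbers shown are illustrative):
Active connections: 2
server accepts handled requests
 1045 1045 2890
Reading: 0 Writing: 1 Waiting: 1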
Log rotation: ensure /etc/logrotate.d/nginx is configured to rotate access and error logs daily or weekly.
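The nginx package ships a policy similar to the following sketch; adjust the frequency and retention to our needs:
/var/log/nginx/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        # Signal Nginx to reopen its log files
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 $(cat /var/run/nginx.pid)
        fi
    endscript
}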
Regular updates: we should routinely update the nginx package and test configs in a staging environment before production rollouts, as sketched below.
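A safe update sequence on the load balancer looks like this:
sudo dnf update nginx -y      # pull the latest package
sudo nginx -t                 # validate the config against the new version
sudo systemctl restart nginx  # restart so the new binary takes effect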
In this tutorial, we've implemented basic load balancing with Nginx on an AlmaLinux 9 server. By following these steps, we’ve built a fully functional Nginx load balancer on AlmaLinux that can distribute traffic reliably, handle failures gracefully, and grow with our application demands. Whether we’re running just two backend servers or twenty, this pattern keeps our infrastructure resilient, performant, and ready for modern cloud-native workloads.