Simplified NGINX Load Balancing with Loadcat

Web applications that are designed to be horizontally scalable often require one or more load balancing nodes. Their primary purpose is to distribute the incoming traffic across available web servers in a fair manner. The ability to increase the overall capacity of a web application simply by increasing the number of nodes and having the load balancers adapt to this change can prove to be tremendously useful in production.

NGINX is a web server that offers high performance load balancing features, among many of its other capabilities. Some of those features are only available as a part of their subscription model, but the free and open source version is still very feature rich and comes with the most essential load balancing features out of the box.

In this tutorial, we will explore the inner mechanics of an experimental tool that allows you to configure your NGINX instance on the fly to act as a load balancer, abstracting away all the nitty-gritty details of NGINX configuration files by providing a neat web-based user interface. The purpose of this article is to show how easy it is to start building such a tool. It is worth mentioning that project Loadcat is inspired heavily by Linode’s NodeBalancers.

NGINX, Servers and Upstreams

One of the most popular uses of NGINX is reverse-proxying requests from clients to web server applications. Although web applications developed in programming languages like Node.js and Go can be self-sufficient web servers, having a reverse-proxy in front of the actual server application provides numerous benefits. A “server” block for a simple use case like this in an NGINX configuration file can look something like this:

server {
	listen 80;

	location / {
		proxy_pass http://192.168.0.2:8080;
	}
}

This would make NGINX listen on port 80 for all incoming requests and pass each of them to the web server application running at the address given in the “proxy_pass” directive (an example address is used above). We could also use the loopback IP address here if the web application server were running locally. Please note that the snippet above lacks some obvious tweaks that are often used in reverse-proxy configurations, but is kept this way for brevity.

But what if we wanted to balance all incoming requests between two instances of the same web application server? This is where the “upstream” directive becomes useful. In NGINX, with the “upstream” directive, it is possible to define multiple back-end nodes among which NGINX will balance all incoming requests. For example:

upstream nodes {
	server 192.168.0.2:8080;
	server 192.168.0.3:8080;
}

server {
	listen 80;

	location / {
		proxy_pass http://nodes;
	}
}

Notice how we defined an “upstream” block, named “nodes”, consisting of two servers. Each server is identified by an IP address and the port number it is listening on. With this, NGINX becomes a load balancer in its simplest form. By default, NGINX distributes incoming requests in a round-robin fashion: the first one is proxied to the first server, the second one to the second server, the third one to the first server, and so on.

However, NGINX has much more to offer when it comes to load balancing. It allows you to define weights for each server, mark them as temporarily unavailable, choose a different balancing algorithm (e.g. there is one that works based on the client’s IP hash), etc. These features and configuration directives are all nicely documented in the official NGINX documentation. Furthermore, NGINX allows configuration files to be changed and reloaded on the fly with almost no interruption.
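As an illustrative sketch of these directives (the addresses are made up, but “least_conn”, “weight”, “backup”, and “down” are all real directives of the open source NGINX), a more elaborate upstream block could look like this:

```nginx
upstream nodes {
	# Pick the server with the fewest active connections
	# (instead of the default round-robin):
	least_conn;

	# This server receives roughly twice as many requests as the next one:
	server 192.168.0.2:8080 weight=2;
	server 192.168.0.3:8080 weight=1;

	# Marked as temporarily unavailable:
	server 192.168.0.4:8080 down;

	# Used only when the other servers are unavailable:
	server 192.168.0.5:8080 backup;
}
```

Swapping least_conn for ip_hash would instead pin each client to a particular server based on a hash of the client’s IP address.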

NGINX’s configurability and simple configuration files make it really easy to adapt it to many needs. And a plethora of tutorials already exist on the Internet that teach you exactly how to configure NGINX as a load balancer.

Loadcat: NGINX Configuration Tool

There is something fascinating about programs that, instead of doing something on their own, configure other tools to do it for them. They do not really do much other than take user input and generate a few files. Most of the benefits you reap from them are in fact features of other tools. But they certainly make life easy. While trying to set up a load balancer for one of my own projects, I wondered: why not do something similar for NGINX and its load balancing capabilities?

Loadcat was born!

Loadcat, built with Go, is still in its infancy. At this moment, the tool allows you to configure NGINX for load balancing and SSL termination only. It provides a simple web-based GUI for the user. Instead of walking through individual features of the tool, let us take a peek at what is underneath. Be aware though, if someone enjoys working with NGINX configuration files by hand, they may find little value in such a tool.

There are a few reasons behind choosing Go as the programming language for this. One of them is that Go produces compiled binaries. This allows us to build and distribute or deploy Loadcat as a compiled binary to remote servers without worrying about resolving dependencies, which greatly simplifies the setup process. Of course, the binary assumes that NGINX is already installed and a systemd unit file exists for it.

In case you are not a Go engineer, do not worry at all. Go is quite easy and fun to get started with. Moreover, the implementation itself is very straightforward and you should be able to follow along easily.


Go build tools impose a few restrictions on how you can structure your application and leave the rest to the developer. In our case, we have broken things into a few Go packages based on their purposes:

  • cfg: loads, parses, and provides configuration values
  • cmd/loadcatd: main package, contains the entry point, compiles into the binary
  • data: contains “models”, uses an embedded key/value store for persistence
  • feline: contains core functionality, e.g. generation of configuration files, reload mechanism, etc.
  • ui: contains templates, URL handlers, etc.

If we take a closer look at the package structure, especially within the feline package, we will notice that all NGINX specific code has been kept within a subpackage feline/nginx. This is done so that we can keep the rest of the application logic generic and extend support for other load balancers (e.g. HAProxy) in the future.
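This separation can be sketched with a dependency-free example (the names fakeDriver and commit are hypothetical, invented here for illustration): the generic application logic only ever talks to an interface, so a future feline/haproxy package would just be another implementation of it.

```go
package main

import "fmt"

// Balancer is a minimal stand-in for Loadcat's data.Balancer.
type Balancer struct{ Id string }

// Driver mirrors the interface that package feline defines.
type Driver interface {
	Generate(dir string, bal *Balancer) error
	Reload() error
}

// fakeDriver stands in for feline/nginx (or a future feline/haproxy);
// it just records what it was asked to do.
type fakeDriver struct{ log []string }

func (d *fakeDriver) Generate(dir string, bal *Balancer) error {
	d.log = append(d.log, "generate "+dir+"/"+bal.Id)
	return nil
}

func (d *fakeDriver) Reload() error {
	d.log = append(d.log, "reload")
	return nil
}

// commit is the shape of the generic application logic: it only
// depends on the Driver interface, never on NGINX directly.
func commit(drv Driver, dir string, bal *Balancer) error {
	if err := drv.Generate(dir, bal); err != nil {
		return err
	}
	return drv.Reload()
}

func main() {
	d := &fakeDriver{}
	commit(d, "out", &Balancer{Id: "abc"})
	fmt.Println(d.log) // → [generate out/abc reload]
}
```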

Entry Point

Let us start with the main package for Loadcat, found within “cmd/loadcatd”. The main function, the entry point of the application, does three things.

func main() {
	fconfig := flag.String("config", "loadcat.conf", "")
	flag.Parse()

	cfg.LoadFile(*fconfig)

	feline.SetBase(filepath.Join(cfg.Current.Core.Dir, "out"))

	data.OpenDB(filepath.Join(cfg.Current.Core.Dir, "loadcat.db"))
	defer data.DB.Close()

	http.Handle("/api", api.Router)
	http.Handle("/", ui.Router)

	go http.ListenAndServe(cfg.Current.Core.Address, nil)

	// Wait for an “interrupt” signal (Ctrl+C in most terminals)
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	<-c
}

To keep things simple and make the code easier to read, all error handling code has been removed from the snippet above (and also from the snippets later in this article).

As you can tell from the code, we are loading the configuration file based on the “-config” command line flag (which defaults to “loadcat.conf” in the current directory). Next, we are initializing a couple of components, namely the core feline package and the database. Finally, we are starting a web server for the web-based GUI.


Loading and parsing the configuration file is probably the easiest part here. We are using TOML to encode configuration information. There is a neat TOML parsing package available for Go. We need very little configuration information from the user, and in most cases we can determine sane defaults for these values. The following struct represents the structure of the configuration file:

struct {
	Core struct {
		Address string
		Dir     string
	}
	Nginx struct {
		Mode    string
		Systemd struct {
			Service string
		}
	}
}
And, here is what a typical “loadcat.conf” file may look like:

[core]
address = ":26590"
dir = "/var/lib/loadcat"

[nginx]
mode = "systemd"

	[nginx.systemd]
	service = "nginx.service"
As we can see, there is a similarity between the structure of the TOML-encoded configuration file and the struct shown above it. The configuration package begins by setting some sane defaults for certain fields of the struct and then parses the configuration file over it. In case it fails to find a configuration file at the specified path, it creates one, and dumps the default values in it first.

func LoadFile(name string) error {
	f, err := os.Open(name)
	if os.IsNotExist(err) {
		// Create the file and dump the default values into it
		f, _ = os.Create(name)
		toml.NewEncoder(f).Encode(Current)
		return nil
	}
	defer f.Close()
	toml.NewDecoder(f).Decode(&Current)
	return nil
}

Data and Persistence

Meet Bolt. An embedded key/value store written in pure Go. It comes as a package with a very simple API, supports transactions out of the box, and is disturbingly fast.

Within package data, we have structs representing each type of entity. For example, we have:

type Balancer struct {
	Id       bson.ObjectId
	Label    string
	Settings BalancerSettings
}

type Server struct {
	Id         bson.ObjectId
	BalancerId bson.ObjectId
	Label      string
	Settings   ServerSettings
}
… where an instance of Balancer represents a single load balancer. Loadcat effectively allows you to balance requests for multiple web applications through a single instance of NGINX. Every balancer can then have one or more servers behind it, where each server can be a separate back-end node.

Since Bolt is a key/value store and doesn’t support advanced database queries, any filtering or lookup logic is implemented on the application side. Loadcat is not meant for configuring thousands of balancers with thousands of servers in each of them, so naturally this naive approach works just fine. Also, Bolt works with keys and values that are byte slices, which is why we BSON-encode the structs before storing them in Bolt. The implementation of a function that retrieves a list of Balancer structs from the database is shown below:

func ListBalancers() ([]Balancer, error) {
	bals := []Balancer{}
	DB.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("balancers"))
		c := b.Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			bal := Balancer{}
			bson.Unmarshal(v, &bal)
			bals = append(bals, bal)
		}
		return nil
	})
	return bals, nil
}

The ListBalancers function starts a read-only transaction, iterates over all the keys and values within the “balancers” bucket, decodes each value into a Balancer struct, and returns them all in a slice.

Storing a balancer in the bucket is almost equally simple:

func (l *Balancer) Put() error {
	if !l.Id.Valid() {
		l.Id = bson.NewObjectId()
	}
	if l.Label == "" {
		l.Label = "Unlabelled"
	}
	if l.Settings.Protocol == "https" {
		// Parse certificate details
	} else {
		// Clear fields relevant to HTTPS only, such as SSL options and certificate details
	}
	return DB.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("balancers"))
		p, err := bson.Marshal(l)
		if err != nil {
			return err
		}
		return b.Put([]byte(l.Id.Hex()), p)
	})
}

The Put function assigns default values to certain fields, parses the attached SSL certificate (in an HTTPS setup), begins a transaction, encodes the struct instance, and stores it in the bucket against the balancer’s ID.

While parsing the SSL certificate, two pieces of information are extracted using standard package encoding/pem and stored in SSLOptions under the Settings field: the DNS names and the fingerprint.
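As a self-contained sketch of that extraction step (the function names here are hypothetical, and the fingerprint is assumed to be a SHA-1 hash of the certificate’s DER bytes; Loadcat’s actual field names differ), using the standard library’s encoding/pem and crypto/x509 packages:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha1"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/hex"
	"encoding/pem"
	"errors"
	"fmt"
	"math/big"
	"time"
)

// selfSignedPEM generates a throwaway self-signed certificate so the
// example is self-contained; Loadcat would receive a user-uploaded one.
func selfSignedPEM() ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example.com"},
		DNSNames:     []string{"example.com", "www.example.com"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

// extractCertInfo decodes the PEM block, parses the certificate, and
// returns its DNS names along with a SHA-1 fingerprint of the DER bytes.
func extractCertInfo(pemBytes []byte) ([]string, string, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return nil, "", errors.New("no certificate found in PEM data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return nil, "", err
	}
	sum := sha1.Sum(block.Bytes)
	return cert.DNSNames, hex.EncodeToString(sum[:]), nil
}

func main() {
	p, _ := selfSignedPEM()
	names, fp, _ := extractCertInfo(p)
	fmt.Println(names, fp)
}
```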

We also have a function that looks up servers by balancer:

func ListServersByBalancer(bal *Balancer) ([]Server, error) {
	srvs := []Server{}
	DB.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("servers"))
		c := b.Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			srv := Server{}
			bson.Unmarshal(v, &srv)
			if srv.BalancerId.Hex() != bal.Id.Hex() {
				continue
			}
			srvs = append(srvs, srv)
		}
		return nil
	})
	return srvs, nil
}

This function shows how naive our approach really is. Here, we are effectively reading the entire “servers” bucket and filtering out the irrelevant entities before returning the slice. But then again, this works just fine, and there is no real reason to change it.

The Put function for servers is much simpler than that of the Balancer struct, as it doesn’t require as many lines of code for setting defaults and computed fields.
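The encode-before-store pattern at the heart of these functions can be sketched without Bolt at all. Here an in-memory map stands in for a Bolt bucket, and encoding/gob stands in for BSON, purely to keep the example dependency-free; the names putBalancer and listBalancers are hypothetical:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// Balancer is a trimmed-down stand-in for Loadcat's data.Balancer.
type Balancer struct {
	Id    string
	Label string
}

// bucket plays the role of a Bolt bucket: keys and values are raw bytes.
var bucket = map[string][]byte{}

// putBalancer encodes the struct to bytes before storing it, just as
// Loadcat BSON-encodes structs before handing them to Bolt.
func putBalancer(b Balancer) error {
	buf := &bytes.Buffer{}
	if err := gob.NewEncoder(buf).Encode(b); err != nil {
		return err
	}
	bucket[b.Id] = buf.Bytes()
	return nil
}

// listBalancers iterates over every stored value and decodes it back,
// mirroring the cursor loop in ListBalancers.
func listBalancers() ([]Balancer, error) {
	bals := []Balancer{}
	for _, v := range bucket {
		b := Balancer{}
		if err := gob.NewDecoder(bytes.NewReader(v)).Decode(&b); err != nil {
			return nil, err
		}
		bals = append(bals, b)
	}
	return bals, nil
}

func main() {
	putBalancer(Balancer{Id: "1", Label: "Example"})
	bals, _ := listBalancers()
	fmt.Println(len(bals), bals[0].Label) // → 1 Example
}
```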

Controlling NGINX

Before using Loadcat, we must configure NGINX to load the generated configuration files. Loadcat generates an “nginx.conf” file for each balancer inside a directory named after the balancer’s ID (a short hex string). These directories are created under an “out” directory in the current working directory. Therefore, it is important to configure NGINX to load these generated configuration files. This can be done using an “include” directive inside the “http” block:

Edit /etc/nginx/nginx.conf and add the following line at the end of the “http” block:

http {
	include /path/to/out/*/nginx.conf;
}

This will cause NGINX to scan all the directories found under “/path/to/out/”, look for files named “nginx.conf” within each one, and load each file that it finds.

In our core package, feline, we define an interface Driver. Any struct that provides two functions, Generate and Reload, with the correct signatures qualifies as a driver.

type Driver interface {
	Generate(string, *data.Balancer) error
	Reload() error
}

For example, the struct Nginx under the feline/nginx package:

type Nginx struct {
	sync.Mutex

	Systemd *dbus.Conn
}

func (n Nginx) Generate(dir string, bal *data.Balancer) error {
	// Acquire a lock on n.Mutex, and release before return

	f, _ := os.Create(filepath.Join(dir, "nginx.conf"))
	TplNginxConf.Execute(f, /* template parameters */)
	f.Close()

	if bal.Settings.Protocol == "https" {
		// Dump private key and certificate to the output directory (so that NGINX can find them)
	}

	return nil
}

func (n Nginx) Reload() error {
	// Acquire a lock on n.Mutex, and release before return

	switch cfg.Current.Nginx.Mode {
	case "systemd":
		if n.Systemd == nil {
			c, _ := dbus.NewSystemdConnection()
			n.Systemd = c
		}

		ch := make(chan string)
		n.Systemd.ReloadUnit(cfg.Current.Nginx.Systemd.Service, "replace", ch)
		<-ch

		return nil

	default:
		return errors.New("unknown Nginx mode")
	}
}

Generate can be invoked with a string containing the path to the output directory and a pointer to a Balancer struct instance. Go provides a standard package for text templating, which the NGINX driver uses to generate the final NGINX configuration file. The template consists of an “upstream” block followed by a “server” block, both generated based on how the balancer is configured:

var TplNginxConf = template.Must(template.New("").Parse(`
upstream {{.Balancer.Id.Hex}} {
	{{if eq .Balancer.Settings.Algorithm "least-connections"}}
	least_conn;
	{{else if eq .Balancer.Settings.Algorithm "source-ip"}}
	ip_hash;
	{{end}}
	{{range $srv := .Balancer.Servers}}
	server {{$srv.Settings.Address}} weight={{$srv.Settings.Weight}} {{if eq $srv.Settings.Availability "available"}}{{else if eq $srv.Settings.Availability "backup"}}backup{{else if eq $srv.Settings.Availability "unavailable"}}down{{end}};
	{{end}}
}
server {
	{{if eq .Balancer.Settings.Protocol "http"}}
	listen {{.Balancer.Settings.Port}};
	{{else if eq .Balancer.Settings.Protocol "https"}}
	listen {{.Balancer.Settings.Port}} ssl;
	{{end}}
	{{if eq .Balancer.Settings.Protocol "https"}}
	# Certificate and key are written next to this file by Generate
	ssl_certificate {{.Dir}}/server.crt;
	ssl_certificate_key {{.Dir}}/server.key;
	{{end}}
	location / {
		proxy_pass http://{{.Balancer.Id.Hex}};
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
	}
}
`))
Reload is the other function on the Nginx struct, and it makes NGINX reload the configuration files. The mechanism used depends on how Loadcat is configured. By default, it assumes NGINX is running as a systemd service named nginx.service, such that [sudo] systemctl reload nginx.service would work. However, instead of executing a shell command, it establishes a connection to systemd over D-Bus using the go-systemd/dbus package.

Web-based GUI

With all these components in place, we’ll wrap it all up with a plain Bootstrap user interface.

For these basic functionalities, a few simple GET and POST route handlers are sufficient:

GET /balancers
GET /balancers/new
POST /balancers/new
GET /balancers/{id}
GET /balancers/{id}/edit
POST /balancers/{id}/edit
GET /balancers/{id}/servers/new
POST /balancers/{id}/servers/new
GET /servers/{id}
GET /servers/{id}/edit
POST /servers/{id}/edit

Going over each individual route may not be the most interesting exercise here, since these are pretty much standard CRUD pages. Feel absolutely free to take a peek at the ui package’s code to see how the handlers for each of these routes are implemented.

Each handler function is a routine that either:

  • Fetches data from the datastore and responds with rendered templates (using the fetched data)
  • Parses incoming form data, makes necessary changes in the datastore and uses package feline to regenerate the NGINX configuration files

For example:

func ServeServerNewForm(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	bal, _ := data.GetBalancer(bson.ObjectIdHex(vars["id"]))

	TplServerNewForm.Execute(w, struct {
		Balancer *data.Balancer
	}{
		Balancer: bal,
	})
}

func HandleServerCreate(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	bal, _ := data.GetBalancer(bson.ObjectIdHex(vars["id"]))

	r.ParseForm()
	body := struct {
		Label    string `schema:"label"`
		Settings struct {
			Address string `schema:"address"`
		} `schema:"settings"`
	}{}
	schema.NewDecoder().Decode(&body, r.PostForm)

	srv := data.Server{}
	srv.BalancerId = bal.Id
	srv.Label = body.Label
	srv.Settings.Address = body.Settings.Address
	srv.Put()

	// Regenerate the NGINX configuration for this balancer using package feline

	http.Redirect(w, r, "/servers/"+srv.Id.Hex()+"/edit", http.StatusSeeOther)
}

All the ServeServerNewForm function does is fetch a balancer from the datastore and render a template, TplServerNewForm in this case, with the fetched balancer as a template parameter.

The HandleServerCreate function, on the other hand, parses the incoming POST payload into a struct and uses that data to instantiate and persist a new Server struct in the datastore, before using package feline to regenerate the NGINX configuration file for the balancer.

All page templates are stored in “ui/templates.go” file and corresponding template HTML files can be found under the “ui/templates” directory.

Trying It Out

Deploying Loadcat to a remote server, or even in your local environment, is super easy. If you are running Linux (64-bit), you can grab an archive with a pre-built Loadcat binary from the repository’s Releases section. If you are feeling a bit adventurous, you can clone the repository and compile the code yourself, although the experience in that case may be a bit disappointing, as compiling Go programs is not really a challenge. And in case you are running Arch Linux, you are in luck: a package has been built for the distribution for convenience. Simply download it and install it using your package manager. The steps involved are outlined in more detail in the project’s README file.

Once you have Loadcat configured and running, point your web browser to “http://localhost:26590” (assuming it is running locally and listening on port 26590). Next, create a balancer, create a couple of servers, make sure something is listening on those defined ports, and voilà: NGINX should start load balancing incoming requests between the running servers.

What’s Next?

This tool is far from perfect, and in fact it is quite an experimental project. It doesn’t even cover all of NGINX’s basic functionality. For example, if you want to cache assets served by the back-end nodes at the NGINX layer, you still have to modify the NGINX configuration files by hand. And that is what makes things exciting. There is a lot that can be done here, and that is exactly what’s next: covering even more of NGINX’s load balancing features, the basic ones and probably even ones that NGINX Plus has to offer.

This article is currently posted at Toptal


Networking · Michelle Young