# Nginx | Core Module | Upstream
## Commands

```bash
# Validate (-t) and print (-T) the currently running config
nginx -T

# Send a signal to the master process : reload, ...
nginx -s SIGNAL
```

E.g., dump the valid/total config of the first-found container of the `rpx` service:

```bash
docker exec -it $(docker ps -q --filter name=rpx -n 1) sh -c 'nginx -T' > 'nginx-T.log'
```
References @ `nginx.info.conf`

- [Top 10 Mistakes](https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/)
## Dockerfile

The entrypoint for a containerized NGINX server should be `nginx -g 'daemon off;'`. This is required for nginx to run in the foreground, else the container stops immediately after starting!

```dockerfile
CMD ["nginx", "-g", "daemon off;"]
```
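A minimal Dockerfile sketch putting that entrypoint in context; the `nginx.conf` path and base-image tag are illustrative (the official `nginx` images already set this `CMD`):

```dockerfile
FROM nginx:alpine
# Replace the stock config with ours (illustrative path)
COPY nginx.conf /etc/nginx/nginx.conf
# Keep nginx in the foreground so the container stays up
CMD ["nginx", "-g", "daemon off;"]
```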
## Reverse Proxy

Expose a service running at `127.0.0.1:3000` (localhost):

```nginx
http {
    server {
        listen 80;
        server_name example.com;

        location /foo/ {
            proxy_pass http://127.0.0.1:3000/bar/;
            # Universally essential headers (sort of)
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```

- Okay to use either an IP address or a domain name in `proxy_pass`.
## Using the `upstream {}` block

```nginx
http {
    upstream auth_svc {
        zone upstreams 64K;
        server 127.0.0.1:3000 max_fails=1 fail_timeout=2s;
        keepalive 2;
    }

    server {
        listen 8080;
        server_name example.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://auth_svc/;
            proxy_next_upstream error timeout http_500;
        }
    }
}
```
Set `max_fails`, else Nginx dies when an upstream service stops, e.g., per `docker service update --replicas=0`.
At Docker `service`, `stack`, and `swarm`, the upstream reference, `auth_svc`, is the Docker service name, and the upstream `server IP_ADDR:PORT` references the container. That port, perhaps set per docker YAML, must also match the application configuration of course. The `server { listen 8080; ... }` port would also be the container port; typically set to `8080` or so:

```yaml
ports:
  - 80:8080
  - 443:8443
```

The service-name referencing allows Nginx to keep up with container changes unless the service itself is terminated; `docker service stop`/`start ...` commands are okay. And `swarm` handles the load balancing.
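Relatedly, a hedged sketch: if the service name must keep resolving while containers churn, Docker's embedded DNS (`127.0.0.11`) plus a variable in `proxy_pass` forces per-request re-resolution (the `auth_svc` name and port follow the example above):

```nginx
server {
    listen 8080;
    resolver 127.0.0.11 valid=10s; # Docker's embedded DNS server
    set $upstream_auth http://auth_svc:3000;

    location / {
        proxy_set_header Host $host;
        # A variable in proxy_pass defers DNS resolution to request time,
        # instead of resolving once at startup.
        proxy_pass $upstream_auth;
    }
}
```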
## Routing

`location /THIS/` versus `proxy_pass .../THAT/`

```nginx
upstream server_reference {
    # Server declaration (container perspective)
}
server {
    listen 8080;
    server_name cdn.example.com;

    location /THIS/ {
        proxy_pass <scheme>://<server_reference>/THAT/;
    }
}
```

- The `/THIS/` is replaced by the `/THAT/`.
- The request from downstream (web) is of `/THIS/`.
- The request sent upstream is of `/THAT/`.
- Upstream is the application server.
- Up/Down is from the client perspective, as in UPload versus DOWNload.
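A concrete instance of the swap, with illustrative paths: a client request for `/api/v2/users` reaches the upstream as `/internal/v2/users`:

```nginx
server {
    listen 80;

    location /api/ {
        # The matched /api/ prefix is replaced by /internal/, so
        # GET /api/v2/users is sent upstream as GET /internal/v2/users.
        proxy_pass http://127.0.0.1:9000/internal/;
    }
}
```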
## Multiple Services

Using the above service declarations as a template, repeat as necessary …

```nginx
http {
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://localhost:3000/;
        }
        location /svcX/v1/ {
            proxy_set_header Host $host;
            proxy_pass http://localhost:4000/foo/;
        }
        location /svcY/ {
            proxy_bind 127.0.0.2; # declare the network adapter
            proxy_pass http://example.com/app2/;
        }
    }
}
```
- @ Docker `stack` (`swarm`), `location` regards the request from downstream (web/client); `proxy_pass` regards the request sent upstream (@ CTNR).
- Wordpress-Nginx `.conf` examples | Search GitHub
- Nginx Config Generator : NGINXConfig.io
## Nginx : `ngx_http_upstream_module` : `keepalive` … and associated params

```nginx
upstream SERVICE_ {
    server SERVICE_NAME:SERVICE_PORT;
    keepalive 2; # Rule of thumb : 2 x REPLICAS
}
server {
    ...
    location /THIS/ {
        proxy_pass http://SERVICE_THAT/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
```
- `/THIS/` is the request sent by the client (downstream).
- `/THAT/` is the request received by the server (upstream).
- `SERVICE_PORT` is that of the container, not that exposed. I.e., @ Docker Compose file:

```yaml
ports:
  - 7770:${SERVICE_PORT}
```
> We recommend setting the parameter to twice the number of servers listed in the `upstream{}` block. This is large enough for NGINX to maintain keepalive connections with all the servers, but small enough that upstream servers can process new incoming connections as well. — https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives
## ERR : Broken Pipe

A broken pipe is a TCP/IP error occurring when you write to a stream whose other end (the peer) has closed the underlying connection.

```text
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:28.609143 logger.go:54: (200) : GET /app/start -> 10.0.3.6:53812 (60.1168ms)
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:28.739194 errors.go:33: 0000000… : ERR : write tcp 10.0.3.3:3030->10.0.3.6:53828: write: broken pipe
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:28.739337 main.go:269: main: @ PWA Shutdown per signal: terminated
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:28.742779 errors.go:33: 0000000… : ERR : write tcp 10.0.3.3:3030->10.0.3.6:53816: write: broken pipe
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:28.751308 errors.go:33: 0000000… : ERR : write tcp 10.0.3.3:3030->10.0.3.6:53824: write: broken pipe
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:33.739488 main.go:156: main: @ PWA Shutdown : Disconnecting from 'db1' @ host : 'pg1'
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:33.741392 main.go:278: main: Completed
core_pwa.1.ybx10pk8sguo@docker-desktop | PWA : 2022/09/07 15:10:33.741418 main.go:66: main: error: could not stop PWA server gracefully: context deadline exceeded
```
## `/rpx_status`

Our alias of `/basic_status`, so that the standard path returns 404 regardless. Still, it denies all but HQ and such.

```bash
☩ curl https://swarm.foo/rpx_status
Active connections: 1
server accepts handled requests
 3 3 45
Reading: 0 Writing: 1 Waiting: 0
```
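A sketch of how such an alias might be declared, assuming the `stub_status` directive and an illustrative HQ address range:

```nginx
server {
    ...
    location /rpx_status {
        stub_status;          # ngx_http_stub_status_module
        allow 203.0.113.0/24; # HQ (illustrative range)
        deny all;             # everyone else gets 403
    }
}
```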
## Proxy FTP

Nginx doesn't support proxying to FTP servers. At best, you can proxy the socket, and this is a real hassle with regular old FTP because it opens new connections on random ports every time a file is requested.

What you can probably do instead is create a FUSE mount of that FTP server at some local path, and serve that path with Nginx like normal. CurlFtpFS is one tool for this. Tutorial: https://linuxconfig.org/mount-remote-ftp-directory-host-locally-into-linux-filesystem
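A hedged sketch of that approach; the FTP host, mount point, and location path are illustrative:

```nginx
# First, mount the remote FTP tree locally (one-time, outside nginx):
#   curlftpfs ftp.example.com /mnt/ftp -o user=USER:PASSWORD
server {
    listen 80;

    location /ftp/ {
        alias /mnt/ftp/; # serve the FUSE-mounted FTP directory
        autoindex on;    # directory listings
    }
}
```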
## PROXY Protocol

The PROXY protocol … to receive client connection information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

With the PROXY protocol, NGINX can learn the originating IP address from HTTP, SSL, HTTP/2, SPDY, WebSocket, and TCP. Knowing the originating IP address of a client may be useful for setting a particular language for a website, keeping a denylist of IP addresses, or simply for logging and statistics purposes.

The information passed via the PROXY protocol is the client IP address, the proxy server IP address, and both port numbers.
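A sketch of accepting the PROXY protocol from a trusted load balancer; the address range and backend are illustrative:

```nginx
server {
    listen 80 proxy_protocol;        # expect the PROXY protocol header
    set_real_ip_from 192.168.1.0/24; # trust only the LB's range
    real_ip_header proxy_protocol;   # take the client IP from that header

    location / {
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_pass http://127.0.0.1:3000;
    }
}
```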