This guide details my experience with Docker networking on a self-hosted TrueNAS SCALE server (v25.04) and how I configured Nginx Proxy Manager to communicate directly with other application containers.
Introduction
In past projects using docker stack, I’ve always configured Nginx to proxy
traffic to other containers within a shared Docker network. This is efficient
and secure. When setting up my TrueNAS SCALE server, I wanted to replicate this
pattern using Nginx Proxy Manager (NPM).
The Problem: Inefficient Network Hairpinning
My initial setup on TrueNAS involved running NPM and various other apps as separate containers. To make them accessible, I configured NPM to proxy traffic to each app.
However, the default TrueNAS UI setup led to an inefficient network pattern:
- Each application container had a port exposed on the host machine (e.g., 9090).
- NPM was configured to forward requests to the host machine’s IP address and the app’s exposed port (e.g., http://192.168.1.100:9090).
This works, but it means traffic flows from NPM, out to my network router, and then back to the same host machine on a different port. This “hairpinning” is inefficient and creates unnecessary network traffic. It also exposes all my app ports to the local network, which is a security risk I wanted to avoid.
The Goal: Direct Container-to-Container Traffic
I knew it was possible for Docker containers on the same host to communicate directly over an internal network without exposing ports or sending traffic outward. My goal was to have NPM proxy traffic directly to other containers, keeping all communication within the host.
Unfortunately, the TrueNAS SCALE web UI for applications doesn’t provide an obvious way to connect a running container to another container’s network.
The Solution: The Docker CLI
The solution was to bypass the UI and use the command line. By SSHing into my TrueNAS server, I gained direct access to the Docker daemon and could manage the networking manually.
The key command is docker network connect. This command allows you to attach a
running container to an existing network.
My process was:
- Identify the containers and their networks.
- Connect the NPM container to the target application’s network.
- Inspect the network to find the application container’s new internal IP address.
- Update the NPM proxy host to use this internal IP instead of the host’s IP and exposed port.
This approach has one major caveat: it’s a manual process. If either the NPM container or the application container is recreated, the network connection is lost and must be re-established. While using container and network names instead of IDs makes this more resilient, it still requires manual intervention after an update.
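Since the connection has to be re-established after every update, it helps to script it. Below is a minimal POSIX-shell sketch of such a helper; the container and network names in the example are hypothetical, so substitute the ones from your own `docker ps` and `docker network ls` output.

```shell
# connect_npm NETWORK CONTAINER -- attach a container (e.g. NPM) to a network.
# Safe to re-run after an update recreates either container: if the
# connection already exists, `docker network connect` fails and we just
# report that instead of aborting.
connect_npm() {
  net="$1"; ctr="$2"
  # DOCKER can be overridden (e.g. DOCKER=echo) for a dry run.
  if "${DOCKER:-docker}" network connect "$net" "$ctr" 2>/dev/null; then
    echo "connected $ctr to $net"
  else
    echo "$ctr already on $net (or connect failed)"
  fi
}

# Example (hypothetical names): connect_npm my_app_network nginx-proxy-manager
```

Using names rather than container IDs keeps the helper valid across recreations, but someone (or a cron job) still has to run it.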
Step-by-Step Commands
For anyone needing to do this, and for my own future reference, here are the steps to connect NPM to an application’s network.
- List all Docker networks to find the one used by NPM:
docker network ls
- List all running containers to identify the NPM container and the target app:
docker ps
- Connect the NPM container to the target app’s network:
docker network connect [npm_network] [app_container]
- Inspect the network to find the app container’s IP address:
docker network inspect [npm_network]
- Use the found IP address in NPM to proxy traffic directly to the app.
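Reading the IP out of the full inspect output by eye is tedious; Docker’s Go-template --format flag can extract it directly. A sketch of that, with hypothetical names:

```shell
# app_ip NETWORK CONTAINER -- print a container's address on that network.
# Reads the same data as `docker network inspect` but pulls out just the
# address via a Go template. Note: .IPv4Address includes the subnet suffix
# (e.g. 172.18.0.5/16); strip everything from the "/" before pasting the
# address into NPM.
app_ip() {
  net="$1"; ctr="$2"
  # DOCKER can be overridden (e.g. DOCKER=echo) for a dry run.
  "${DOCKER:-docker}" network inspect "$net" --format \
    "{{range .Containers}}{{if eq .Name \"$ctr\"}}{{.IPv4Address}}{{end}}{{end}}"
}

# Example (hypothetical names): app_ip my_app_network my_app_container
```

A cleaner long-term option is to skip the IP entirely and use the container name as the hostname in NPM, since Docker’s embedded DNS resolves container names on user-defined networks; the IP approach above is what the UI steps describe.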
Conclusion
While this manual CLI approach works, it’s not ideal for long-term management. My hope is that future versions of TrueNAS SCALE will enhance the UI to allow users to attach a container to an existing network or to create a shared, named network that multiple applications can join. This would streamline the process and align it more closely with standard Docker practices.
Update Dec 2025
I have since converted all my apps to fully custom apps defined by a YAML configuration. In that configuration I can add a networks property and assign each app to a shared network, and that assignment persists across container and system restarts. I’m also comfortable working in YAML configuration files, so the switch wasn’t too bad.
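The original post doesn’t show the actual configuration, but as a sketch, a Compose-style custom-app YAML that joins a pre-created shared network might look like this (image, service, and network names are hypothetical):

```yaml
# Hypothetical custom-app config (Compose-style YAML).
services:
  my-app:
    image: example/my-app:latest
    networks:
      - npm_shared        # joined at creation time, so it survives recreation
networks:
  npm_shared:
    external: true        # created once up front: docker network create npm_shared
```

With both NPM and the app declared on the same external network, the proxy host in NPM can target the app by container name with no exposed host ports and no manual docker network connect after updates.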