Network Behavior
Network model and expected service-to-service traffic for MatrixEasyMode in the current operator-managed deployment.
This page explains the intended network behavior of that deployment model: public routes, internal service-to-service traffic, and the operator-managed ingress assumptions.
For installation steps, start with Get Started and the Installation Guide. For the broader system shape, see Architecture.
MatrixEasyMode is currently designed as operator-managed software.
That means the network model is meant to be understandable, not hidden. The deployment assumes a serious self-hoster or infrastructure-capable operator who wants to know:
- what is exposed publicly
- what stays inside Docker
- which services talk to which other services
- what MatrixEasyMode expects from Nginx Proxy Manager
- what traffic is part of normal runtime behavior
This page documents the intended network behavior of the current deployment pack.
Why this matters
For self-hosted software, trust is not only about features. It is also about clarity.
Operators should be able to reason about:
- public entrypoints
- internal service dependencies
- certificate and ingress assumptions
- the difference between startup-time behavior and normal runtime behavior
- the difference between baseline platform traffic and operator-triggered provisioning workflows
That is why MatrixEasyMode documents network behavior explicitly.
High-level network shape
The standard deployment includes four main services:
- postgres
- npm (Nginx Proxy Manager)
- api
- web
These services communicate in two different network contexts:
- public network behavior
  - browser to public web route
  - browser to public API route
  - operator to Nginx Proxy Manager admin UI
- internal Docker network behavior
  - web to api
  - api to postgres
  - api to npm
A simple mental model looks like this:
Browser
-> https://admin.your-domain.com -> Nginx Proxy Manager -> web:3000
-> https://api.your-domain.com -> Nginx Proxy Manager -> api:7000
Operator
-> http://your-server:81 -> Nginx Proxy Manager admin UI
Inside Docker
web -> api
api -> postgres
api -> npm

Public entrypoints
In the normal deployment model, the public-facing routes are:
- MatrixEasyMode web frontend
- MatrixEasyMode API
- Nginx Proxy Manager admin UI
Typical examples are:
https://admin.your-domain.com
https://api.your-domain.com
http://your-server:81

The first two are the main MatrixEasyMode platform routes. The third is the operator-facing Nginx Proxy Manager UI.
Internal service addresses
Inside Docker, services communicate using Docker service names rather than public hostnames.
Typical examples are:
http://web:3000
http://api:7000
http://npm:81/api
postgres:5432

This is an important distinction.
Public traffic should use the public hostnames. Internal container-to-container traffic should use the internal service addresses.
That is why configuration values such as these matter:
NEXTAUTH_URL=https://admin.your-domain.com
NEXTAUTH_URL_INTERNAL=http://web:3000
NEXT_PUBLIC_API_URL=https://api.your-domain.com
API_URL=http://api:7000
NPM_BASEURL=http://npm:81/api

Public versus internal behavior
A useful way to think about MatrixEasyMode is that it has two network identities at the same time.
Public identity
This is what browsers and external users see.
Examples:
https://admin.your-domain.com
https://api.your-domain.com
This side depends on:
- DNS
- HTTPS
- reverse proxy routing
- certificate availability in Nginx Proxy Manager
Internal identity
This is what the containers use to talk to each other inside Docker.
Examples:
web:3000
api:7000
npm:81
postgres:5432
This side depends on:
- Docker networking
- correct compose configuration
- correct environment values
- services being healthy and started in the right order
Both sides matter. A deployment can look healthy internally but still fail publicly if DNS, routing, or certificates are wrong.
Expected service-to-service traffic
The baseline network behavior is intentionally straightforward.
Browser to web frontend
Browsers connect to the public MatrixEasyMode web hostname over HTTPS.
Typical flow:
Browser -> NPM public host -> web:3000

This is the main browser-facing application entrypoint.
Browser to API
Browsers also need to reach the public API hostname.
Typical flow:
Browser -> NPM public host -> api:7000

This is why NEXT_PUBLIC_API_URL must match the public API route the browser is expected to use.
Web frontend to API
The web container communicates with the API container over the internal Docker network.
Typical flow:
web -> api:7000

This is why internal configuration values such as API_URL=http://api:7000 matter.
API to PostgreSQL
The API communicates with PostgreSQL over the Docker network.
Typical flow:
api -> postgres:5432

This is foundational runtime traffic. If it fails, the API will not operate correctly.
API to Nginx Proxy Manager
The API communicates with Nginx Proxy Manager over the Docker network using the NPM API.
Typical flow:
api -> npm:81/api

This is part of the current platform design. MatrixEasyMode does not treat ingress as something completely external to the runtime model.
The API uses operator-provided NPM credentials to authenticate and manage or reconcile the main platform routes.
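The shape of that authentication call can be sketched against the NPM token endpoint. This is a dry-run illustration: it only prints the request it would send, nothing is transmitted. NPM_BASEURL matches the configuration shown earlier, while the email and password values here are placeholders for the operator-provided credentials, not the project's real variable names.

```shell
#!/bin/sh
# Dry-run sketch of the NPM token request (nothing is sent).
NPM_BASEURL="${NPM_BASEURL:-http://npm:81/api}"

npm_token_request() {
  # Print the endpoint and JSON body a client would POST to
  # exchange operator credentials for an NPM API token.
  printf 'POST %s/tokens\n' "$NPM_BASEURL"
  printf '{"identity":"%s","secret":"%s"}\n' "$1" "$2"
}

npm_token_request 'admin@example.com' 'changeme'
```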
Nginx Proxy Manager behavior
Nginx Proxy Manager is central to the current deployment model.
From a network perspective, it plays two roles:
- public reverse proxy for the MatrixEasyMode web and API routes
- internal API target used by MatrixEasyMode for ingress bootstrap and reconciliation
That means NPM is visible in two different ways:
Public/operator access
The operator accesses the NPM admin UI on port 81.
Typical example:
http://your-server:81

Internal API access
The MatrixEasyMode API accesses the NPM API internally.
Typical example:
http://npm:81/api

This dual role is why NPM must be running before the MatrixEasyMode app layer starts.
Certificate assumptions
The current deployment model assumes that the required wildcard certificate already exists in Nginx Proxy Manager before the app layer starts.
Typical example:
*.your-domain.com

That certificate is then used for managed public host creation.
This matters because part of MatrixEasyMode's startup behavior is tied to ingress readiness. If the certificate does not exist, you can see route bootstrap failures even when containers themselves are running.
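A minimal way to check this prerequisite is to list certificates through the NPM API and look for the wildcard domain. The function below only inspects JSON it receives on stdin; the commented curl line shows how it might be fed in practice, assuming an API token is already available in NPM_TOKEN (a placeholder name).

```shell
#!/bin/sh
# Report whether a wildcard certificate for the given domain
# appears in certificate-listing JSON read from stdin.
has_wildcard_cert() {
  domain="$1"
  if grep -q "\"\\*\\.${domain}\""; then
    echo "found"
  else
    echo "missing"
  fi
}

# Live usage sketch (NPM_TOKEN is a placeholder for a real API token):
#   curl -s -H "Authorization: Bearer $NPM_TOKEN" \
#     http://npm:81/api/nginx/certificates | has_wildcard_cert your-domain.com
```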
Startup-time network behavior
Some important network activity happens at startup rather than during ordinary steady-state use.
In particular, the MatrixEasyMode API may attempt to:
- authenticate against the NPM API
- inspect existing public route state
- create or reconcile the main web and API routes
- retry when NPM prerequisites are not yet satisfied
This is one reason the deployment model is staged:
- infrastructure first
- application second
The network behavior is simpler and more predictable when the prerequisites already exist before the app layer starts.
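The retry behavior can be pictured as a small wait loop. This is an illustrative sketch rather than the project's actual startup code; the probed URL in the usage comment is an assumption based on the internal addresses above.

```shell
#!/bin/sh
# Retry a probe command until it succeeds or attempts run out.
wait_for() {
  probe="$1"
  attempts="${2:-30}"
  delay="${3:-2}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if $probe >/dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "gave up after $attempts attempt(s)"
  return 1
}

# App-layer style usage (assumed probe; requires curl in the container):
#   wait_for "curl -fsS http://npm:81/api" 30 2
```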
Normal steady-state runtime behavior
Once the platform is up and healthy, the expected baseline behavior is much calmer.
In normal runtime, the main flows are:
- browsers using the public web route
- browsers using the public API route
- web talking to api
- api talking to postgres
- api talking to npm when required for ingress-related behavior
The exact volume and frequency of traffic depend on how the platform is being used, but the dependency graph itself should not be surprising.
Operator-triggered versus baseline behavior
It is useful to separate baseline network behavior from operator-triggered behavior.
Baseline behavior
Baseline behavior includes the normal platform flows described above:
- browser to public web route
- browser to public API route
- web to api
- api to postgres
- api to npm
Operator-triggered behavior
The broader deployment and platform model also includes features and workflows that may involve additional network activity when intentionally used by the operator.
Examples may include:
- image pulls during deployment or upgrade
- local build-time dependency fetches in local mode
- provisioning-related flows
- Matrix bootstrap or related platform workflows
- runtime actions that intentionally create or manage hosted components
Those should be understood as feature-driven or deployment-time traffic, not as mysterious hidden baseline runtime traffic.
Registry mode versus local mode
The overall network shape stays the same in both image modes.
Registry mode
In registry mode, the application images are pulled from a registry.
That means there is deployment-time network activity associated with pulling container images, but the steady-state runtime model remains the same once the services are running.
Local mode
In local mode, application images are built locally.
That may involve build-time network access depending on the Dockerfiles and package sources involved in the local build process.
Again, the key point is that the runtime service-to-service network model remains the same:
- web to api
- api to postgres
- api to npm
- browser to the public MatrixEasyMode routes through NPM
Ports and exposure
In the standard deployment posture, operators should think about ports in two groups.
Public or host-bound ports
These are the ports that matter from outside the containers, depending on how the host is exposed in your environment.
Common examples include:
80, 81, 443, 3000, and 7000
Not every environment will expose all of these directly in the same way, but they are part of the normal operator model described in the installation docs.
Internal service ports
Inside Docker, the important service ports are:
web:3000
api:7000
postgres:5432
npm:81
These should be understood as internal connectivity targets even when the public-facing routes are entirely HTTPS-based.
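One way to confirm that these targets are reachable from inside the Docker network, rather than from the host, is to probe them from a running container. The helper below only prints the command you would run; it assumes the compose service names above and that the api image includes nc.

```shell
#!/bin/sh
# Print the docker compose command for probing an internal
# service port from inside the api container (dry run).
probe_cmd() {
  printf "docker compose --profile app exec api sh -c 'nc -z %s %s'\n" "$1" "$2"
}

probe_cmd postgres 5432
probe_cmd npm 81
```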
What this document is not claiming
This page is intended to document the current deployment model clearly.
It is not trying to make vague marketing claims such as “nothing ever talks to anything unexpected” without context.
A more accurate operator reading is:
- the core runtime model is explicit
- the main service-to-service flows are understandable
- ingress and certificate assumptions are operator-visible
- deployment-time and feature-driven traffic should be distinguished from steady-state application traffic
- operators should still verify behavior in their own environment when they need stronger assurance
That is the correct self-hosting posture.
How operators should verify network behavior
If you want to validate the behavior in your own environment, a good checklist is:
- review .env
- confirm the public hostnames you intended
- confirm the internal service URLs
- inspect NPM routes and certificate state
- review container logs
- verify which services are reachable only inside Docker versus publicly
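The first few checklist items can be partly automated. Below is a minimal sketch that checks some invariants implied by this page: public URLs should be https on your real domain, and internal URLs should point at Docker service names. The variable names match the configuration examples above; the rules themselves are assumptions you may want to tighten for your environment.

```shell
#!/bin/sh
# Check public/internal URL invariants in .env-style input on stdin.
check_env() {
  status=0
  while IFS='=' read -r key value; do
    case "$key" in
      NEXTAUTH_URL|NEXT_PUBLIC_API_URL)
        # Public identity: browsers need an https URL on your domain.
        case "$value" in
          https://*) ;;
          *) echo "WARN: $key should be a public https URL"; status=1 ;;
        esac
        ;;
      NEXTAUTH_URL_INTERNAL|API_URL|NPM_BASEURL)
        # Internal identity: containers use Docker service names.
        case "$value" in
          http://web:*|http://api:*|http://npm:*) ;;
          *) echo "WARN: $key should use an internal Docker service address"; status=1 ;;
        esac
        ;;
    esac
  done
  [ "$status" -eq 0 ] && echo "env looks consistent"
  return "$status"
}

# Usage: check_env < .env
```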
Useful commands include:
./stack.sh status
./stack.sh logs infra
./stack.sh logs app
./stack.sh logs api
./stack.sh logs web

For advanced operators using raw Docker Compose:
docker compose --profile app ps
docker compose --profile app logs -f api web

Common mistakes that look like network problems
A few issues are especially common:
- public URLs in .env do not match the real DNS names
- NPM is running, but the wildcard certificate does not exist
- the app layer was started before ingress prerequisites were ready
- local mode images were not built, so the app never really started
- internal service addresses were changed casually and no longer match the compose model
These often appear as “network issues” when the real cause is configuration drift or startup order.
How to think about MatrixEasyMode in one sentence
A practical mental model is:
MatrixEasyMode uses operator-managed public HTTPS routes through Nginx Proxy Manager, while the web, API, NPM, and PostgreSQL services communicate internally over Docker networking with clearly separated public and internal addresses.
Final note
Network behavior is part of MatrixEasyMode's trust posture.
The goal is not to pretend the platform has no infrastructure reality. The goal is to make that reality legible:
- what is public
- what is internal
- what depends on what
- what happens at startup
- what happens in normal runtime
- what belongs to the operator
That clarity is intentional.
