nautobot-worker
Site-level Nautobot Celery workers that connect to the global Nautobot
database and Redis. This component deploys only the Celery worker
portion of the Nautobot Helm chart on site clusters, allowing sites to
process background tasks locally without running the full Nautobot web
application. The web server, Redis, and PostgreSQL all remain on the
global cluster -- site workers connect back to those shared services
over the network.
The matching Nautobot JobQueue records and Job assignments are
reconciled by the
nautobot-job-queues component.
Deployment Scope
- Cluster scope: site
- Values key: `site.nautobot_worker`
- ArgoCD Application template: `charts/argocd-understack/templates/application-nautobot-worker.yaml`
How ArgoCD Builds It
- ArgoCD renders the Helm chart `nautobot` and the Kustomize path `components/nautobot-worker`.
- The deploy repo contributes `values.yaml` for this component.
- The deploy repo overlay directory for this component is applied as a second source, so `kustomization.yaml` and any referenced manifests are part of the final Application.
How to Enable
Enable this component in your site deployment values file:
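A minimal enablement snippet (the same keys shown in Step 6 of the new-site walkthrough):

```yaml
site:
  nautobot_worker:
    enabled: true
```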
Configuration Architecture
The worker uses the same Helm chart `fileParameters` mechanism as the
global Nautobot deployment. The default config path is
`$understack/components/nautobot/nautobot_config.py`, but site
deployments can override it with `site.nautobot_worker.nautobot_config`.
A site deployment can point this value at a shared deploy-repo config:
site:
nautobot_worker:
nautobot_config: '$deploy/apps/nautobot-config/nautobot_config.py'
Use the same deploy-specific config for global Nautobot and site workers
when they must share mTLS, plugin, SSO, and production-hardening
behavior.
Architecture
Site workers connect to the global cluster's PostgreSQL (CNPG) and Redis
through the Envoy Gateway. Both connections use mutual TLS (mTLS) with
TLS passthrough at the gateway, so the cryptographic handshake happens
directly between the worker pod and the database/Redis server.
Site Cluster Global Cluster
+------------------+ +---------------------------+
| Worker Pod | TLS+ClientCert | Envoy Gateway |
| - celery | ---------------> | port 5432 (passthrough) | --> CNPG PostgreSQL
| - mtls certs | ---------------> | port 6379 (passthrough) | --> Redis
+------------------+ +---------------------------+
The worker pods mount a client certificate (issued by a dedicated
internal CA via cert-manager) and present it during the TLS handshake.
See Certificate Infrastructure for
details on the CA hierarchy and how certificates are provisioned.
PostgreSQL and Redis on the global cluster verify the client certificate
against the same CA before accepting the connection.
Why mTLS?
Site workers run on remote clusters and connect to the global database
and Redis over the network. Password-only authentication is insufficient
for cross-cluster connections -- if a credential leaks, any host with
network access could connect to the production database. mTLS ensures
that even with a leaked password, connections without a valid client
certificate are rejected. Traffic is encrypted end-to-end between the
worker pod and the server.
Plugin Loading
Deployment-specific plugin configuration can live in the shared deploy
nautobot_config.py, with credentials supplied by nautobot-custom-env.
Site workers use the same config as global Nautobot, so plugin changes
belong in the shared deploy config, not in worker-only values. For
details, see the
Nautobot Plugin Loading
operator guide.
Connection Security
PostgreSQL (CNPG)
The global CNPG cluster is configured with:
- `spec.certificates.serverTLSSecret` and `spec.certificates.serverCASecret`
  for server-side TLS.
- `spec.certificates.clientCASecret` set to the CA public cert secret
  (`mtls-ca-cert`). CNPG uses this to populate PostgreSQL's
  `ssl_ca_file`, which is what PostgreSQL checks when verifying client
  certificates during `pg_hba` `cert` authentication. The secret only
  needs `ca.crt` (the root CA public cert).
- `spec.certificates.replicationTLSSecret` set to a cert-manager
  Certificate (`nautobot-cluster-replication`) with
  `commonName: streaming_replica`. This provides the client cert CNPG
  uses for streaming replication between PostgreSQL instances. When
  `replicationTLSSecret` is provided, CNPG does not need the CA private
  key in `clientCASecret`, which is why we can use `mtls-ca-cert`
  (which only has `ca.crt`) instead of `mtls-ca-key-pair`.
- `pg_hba` rules that require `hostssl ... cert` for all connections,
  enforcing client certificate authentication over TLS.
Both global pods and site workers connect with sslmode=verify-ca,
presenting their client certificate, key, and the CA root cert via
Django's DATABASES OPTIONS.
The nautobot_config.py SSL logic is conditional on the
NAUTOBOT_DB_SSLMODE environment variable:
- `verify-ca` or `verify-full`: reads cert paths from environment
  variables (defaults under `/etc/nautobot/mtls/`) and sets full mTLS
  options on `DATABASES["default"]["OPTIONS"]`. Used by both global
  pods and site workers.
- `require`: sets `sslmode=require` only -- encrypts the connection
  without presenting a client certificate or verifying the server CA.
- Unset or empty: no SSL options are applied and pods connect with
  password-only auth over plain TCP.
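The conditional logic above can be sketched as follows. This is a minimal illustration of the shape of that logic, not the shipped `nautobot_config.py`; the env var names and default paths are the ones documented in this guide.

```python
import os

def pg_ssl_options():
    """Build Django DATABASES OPTIONS from NAUTOBOT_DB_SSLMODE (sketch)."""
    sslmode = os.getenv("NAUTOBOT_DB_SSLMODE", "")
    if sslmode in ("verify-ca", "verify-full"):
        # Full mTLS: client cert, client key, and the CA root cert.
        return {
            "sslmode": sslmode,
            "sslcert": os.getenv("NAUTOBOT_DB_SSLCERT", "/etc/nautobot/mtls/tls.crt"),
            "sslkey": os.getenv("NAUTOBOT_DB_SSLKEY", "/etc/nautobot/mtls/tls.key"),
            "sslrootcert": os.getenv("NAUTOBOT_DB_SSLROOTCERT", "/etc/nautobot/mtls/ca.crt"),
        }
    if sslmode == "require":
        # Encrypted connection only: no client cert, no server CA check.
        return {"sslmode": "require"}
    # Unset/empty: plain TCP with password-only auth.
    return {}
```

Because the function returns `{}` when the variable is unset, the same file works unchanged for any pod that does not mount the cert volume.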
pg_hba Rule
The CNPG cluster uses a single pg_hba rule:
`hostssl all all 0.0.0.0/0 cert` -- all connections must use TLS
and present a valid client certificate. The certificate CN maps to
the PostgreSQL user (must be `app`).
Redis
The global Redis mTLS configuration is described in the
global nautobot deploy guide. Site workers
use the same auto-detection mechanism -- when the mTLS cert volume is
mounted, Redis SSL is configured automatically.
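The auto-detection can be sketched like this. It is an illustration of the mechanism described above, assuming the default mount path and env var names from this guide, not the actual shipped config:

```python
import os

def redis_ssl_kwargs(ca_path=None):
    """Return Redis SSL options if the mTLS cert volume is mounted (sketch)."""
    if ca_path is None:
        ca_path = os.getenv("NAUTOBOT_REDIS_SSL_CA_CERTS", "/etc/nautobot/mtls/ca.crt")
    if not os.path.exists(ca_path):
        # No cert volume mounted: plain Redis connection.
        return {}
    return {
        "ssl": True,
        "ssl_cert_reqs": os.getenv("NAUTOBOT_REDIS_SSL_CERT_REQS", "required"),
        "ssl_ca_certs": ca_path,
        "ssl_certfile": os.getenv("NAUTOBOT_REDIS_SSL_CERTFILE", "/etc/nautobot/mtls/tls.crt"),
        "ssl_keyfile": os.getenv("NAUTOBOT_REDIS_SSL_KEYFILE", "/etc/nautobot/mtls/tls.key"),
    }
```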
Envoy Gateway
Both PostgreSQL (port 5432) and Redis (port 6379) use routes.tls
entries with TLS passthrough mode. The gateway routes traffic based on
SNI hostname without terminating TLS, preserving end-to-end mTLS.
Firewall Requirements
Site workers reach the global PostgreSQL and Redis services through the
Envoy Gateway LoadBalancer address. Because these are separate
routes.tls listeners, HTTPS access to the Nautobot web endpoint is not
enough. The network path must allow the database and Redis listener
ports as well.
For each site, request or configure firewall/security policy with:
| Field | Value |
|---|---|
| Source | Site worker egress CIDRs, such as node CIDRs, pod CIDRs, or NAT ranges |
| Destination | Envoy Gateway LoadBalancer/VIP for the global cluster |
| Services | TCP/5432 and TCP/6379 |
| Protocol handling | TLS/SSL passthrough if the firewall requires an application/protocol match |
| Action | Allow |
The Envoy config should have TLS routes similar to:
routes:
tls:
- name: nautobot-db
fqdn: nautobot-db.<env>.example.com
gatewayPort: 5432
namespace: nautobot
service:
name: nautobot-cluster-rw
port: 5432
- name: nautobot-redis
fqdn: nautobot-redis.<env>.example.com
gatewayPort: 6379
namespace: nautobot
service:
name: nautobot-redis-master
port: 6379
Both FQDNs must resolve to the Envoy Gateway LoadBalancer/VIP. From a
site worker pod, verify routing before debugging mTLS:
kubectl exec -n nautobot deploy/nautobot-worker-celery-site-dc -- sh -lc \
'nc -vz nautobot-db.<env>.example.com 5432 && nc -vz nautobot-redis.<env>.example.com 6379'
Certificate Infrastructure
Global Cluster
The global cluster hosts the mTLS CA hierarchy described in the
global nautobot deploy guide.
Site Clusters
Client certificates are issued on the global cluster by cert-manager
and distributed to site clusters through your external secrets provider.
The CA private key never leaves the global cluster -- a compromised
site cannot forge certificates for other sites.
The site worker mounts only the site-local Secret named
`nautobot-mtls-client`. An ExternalSecret creates that Secret from the
secrets provider. The provider data should come from the per-site source
Secret issued on the global cluster, named
`nautobot-mtls-client-<site>`.
Each site needs two credentials from the secrets provider:
| Credential | Content | Scope |
|---|---|---|
| Client cert+key | The issued `tls.crt` and `tls.key` for this site | Per-site |
| CA public cert | The `ca.crt` from the mTLS CA | Shared across all sites |
The ExternalSecret on the site cluster combines these into a single
nautobot-mtls-client secret (type kubernetes.io/tls) with tls.crt,
tls.key, and ca.crt. This secret is mounted into worker pods at
/etc/nautobot/mtls/. The shared nautobot_config.py uses those stable
file paths for both global pods and site workers.
Note: if your secrets provider stores PEM data with \r\n line endings
or concatenates multiple PEM blocks in a single field, use the
filterPEM
template function to extract specific block types. filterPEM handles
carriage-return stripping automatically.
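A Python analogue of what `filterPEM` does -- extracting blocks of one PEM type from a field that may hold several concatenated blocks with `\r\n` line endings -- looks roughly like this. It is illustrative only, not the external-secrets implementation:

```python
import re

def filter_pem(data: str, block_type: str) -> str:
    """Return only the PEM blocks of the given type, with \r stripped."""
    data = data.replace("\r", "")  # strip carriage returns first
    pattern = rf"-----BEGIN {block_type}-----.*?-----END {block_type}-----\n?"
    return "".join(re.findall(pattern, data, flags=re.DOTALL))
```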
Adding a New Site
This section walks through configuring nautobot-worker for a new site
cluster. All files go in <site-name>/nautobot-worker/ in the deploy
repo.
Prerequisites
Before starting, ensure the global cluster already has:
- The mTLS CA hierarchy deployed (issuers, root CA, CA issuer)
- Server TLS certificates for PostgreSQL and Redis
- A global `nautobot-mtls-client` certificate (for Redis `authClients`)
- CNPG configured with `serverTLSSecret`, `serverCASecret`, `clientCASecret`, and `pg_hba`
- Redis TLS enabled with `authClients: true`
- Envoy Gateway TLS passthrough routes on ports 5432 and 6379
- Firewall/security policy allowing TCP/5432 and TCP/6379 from the site worker egress CIDRs to the Envoy Gateway LoadBalancer/VIP
You also need the pre-issued client certificate stored in your external
secrets provider (see Step 1).
Step 1: Issue the client certificate on the global cluster
Create a cert-manager Certificate resource on the global cluster for
this site. The commonName must match the PostgreSQL database user
(typically app) because pg_hba cert maps the certificate CN to the
DB user.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: nautobot-mtls-client-<site>
namespace: nautobot
spec:
secretName: nautobot-mtls-client-<site>
duration: 26280h # 3 years
renewBefore: 720h # 30 days
commonName: app
usages:
- client auth
privateKey:
algorithm: ECDSA
size: 256
issuerRef:
name: mtls-ca-issuer
kind: Issuer
Add it to the global nautobot kustomization. After ArgoCD syncs,
cert-manager issues the certificate into a Kubernetes secret.
Then extract the cert material and upload it to your secrets provider
as two separate credentials:
# Extract the client cert + key (per-site credential)
kubectl get secret nautobot-mtls-client-<site> -n nautobot \
-o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/tls.crt
kubectl get secret nautobot-mtls-client-<site> -n nautobot \
-o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/tls.key
# Upload to your secrets provider as a single credential with
# the cert and key concatenated in one field.
# Extract the CA public cert (shared across all sites, one-time)
kubectl get secret mtls-ca-cert -n nautobot \
-o jsonpath='{.data.ca\.crt}' | base64 -d > /tmp/ca.crt
# Upload to your secrets provider as a separate credential.
# This only needs to be done once -- all sites share the same CA cert.
The CA private key stays in the mtls-ca-key-pair secret on the global
cluster and is never extracted or distributed.
Step 2: Create the site directory
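Create the per-site directory in the deploy repo. The site name here is a placeholder; use your real site name:

```shell
# "site-dc-example" is an example site name -- substitute your own.
SITE=site-dc-example
mkdir -p "${SITE}/nautobot-worker"
```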
Step 3: Create ExternalSecrets for credentials
Create ExternalSecret resources that pull credentials from your secrets
provider into the nautobot namespace. A deployment-specific config
that reads additional environment variables also needs
nautobot-custom-env:
| ExternalSecret | Target Secret | Purpose |
|---|---|---|
| `externalsecret-nautobot-django.yaml` | `nautobot-django` | Django `SECRET_KEY` -- must match the global instance |
| `externalsecret-nautobot-db.yaml` | `nautobot-db` | CNPG app user password (satisfies Helm chart requirement) |
| `externalsecret-nautobot-worker-redis.yaml` | `nautobot-worker-redis` | Redis password |
| `externalsecret-dockerconfigjson-github-com.yaml` | `dockerconfigjson-github-com` | Container registry credentials |
| `externalsecret-nautobot-mtls-client.yaml` | `nautobot-mtls-client` | mTLS client cert + CA cert (two credentials combined) |
| `externalsecret-nautobot-custom-env.yaml` | `nautobot-custom-env` | Deployment-specific integration credentials or runtime settings |
The mTLS ExternalSecret pulls from two separate credentials in your
secrets provider -- the per-site client cert+key and the shared CA
public cert -- and combines them into a single kubernetes.io/tls
secret with tls.crt, tls.key, and ca.crt.
If both credentials have the same field name (e.g. password), use
dataFrom with rewrite to prefix the keys and avoid collision:
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: nautobot-mtls-client
spec:
refreshInterval: 1h
secretStoreRef:
kind: ClusterSecretStore
name: <your-store>
target:
creationPolicy: Owner
deletionPolicy: Retain
template:
engineVersion: v2
type: kubernetes.io/tls
data:
tls.crt: '{{ .client_password | filterPEM "CERTIFICATE" }}'
tls.key: '{{ .client_password | filterPEM "EC PRIVATE KEY" }}'
ca.crt: '{{ .ca_password | filterPEM "CERTIFICATE" }}'
dataFrom:
- extract:
key: "<client-cert-credential-id>"
rewrite:
- regexp:
source: "(.*)"
target: "client_$1"
- extract:
key: "<ca-cert-credential-id>"
rewrite:
- regexp:
source: "(.*)"
target: "ca_$1"
The filterPEM
function extracts PEM blocks by type and strips carriage returns
automatically. Pass the PEM block type without the BEGIN/END
markers (e.g. "CERTIFICATE", "EC PRIVATE KEY", "PRIVATE KEY").
Step 4: Create the kustomization
Create kustomization.yaml listing all resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- externalsecret-nautobot-django.yaml
- externalsecret-nautobot-db.yaml
- externalsecret-nautobot-worker-redis.yaml
- externalsecret-dockerconfigjson-github-com.yaml
- externalsecret-nautobot-mtls-client.yaml
- externalsecret-nautobot-custom-env.yaml
Step 5: Create the values file
Create values.yaml with the site-specific overrides.
The Celery worker name and queue are rendered by ArgoCD Application from the
understack.rackspace.com/site app label.
nautobot:
db:
host: "nautobot-db.<env>.example.com"
redis:
host: "nautobot-redis.<env>.example.com"
ssl: true
image:
registry: "ghcr.io"
repository: "<org>/<nautobot-image>"
tag: "latest"
pullPolicy: "Always"
pullSecrets:
- dockerconfigjson-github-com
celery:
extraEnvVars:
- name: NAUTOBOT_CONFIG
value: /opt/nautobot/nautobot_config.py
- name: NAUTOBOT_DB_SSLMODE
value: verify-ca
- name: NAUTOBOT_DB_SSLNEGOTIATION
value: direct
- name: NAUTOBOT_REDIS_SSL_CERT_REQS
value: required
- name: NAUTOBOT_REDIS_SSL_CA_CERTS
value: /etc/nautobot/mtls/ca.crt
- name: NAUTOBOT_REDIS_SSL_CERTFILE
value: /etc/nautobot/mtls/tls.crt
- name: NAUTOBOT_REDIS_SSL_KEYFILE
value: /etc/nautobot/mtls/tls.key
extraVolumes:
- name: mtls-certs
secret:
secretName: nautobot-mtls-client
defaultMode: 256
extraVolumeMounts:
- name: mtls-certs
mountPath: /etc/nautobot/mtls
readOnly: true
Step 6: Enable in deploy.yaml
Add nautobot_worker to the site's deploy.yaml:
site:
nautobot_worker:
enabled: true
nautobot_config: '$deploy/apps/nautobot-config/nautobot_config.py'
Step 7: Verify
After ArgoCD syncs, verify the worker is running and connected:
# Check the client cert secret was pulled from the secrets provider
kubectl get secret nautobot-mtls-client -n nautobot
# Check the worker pod is running
kubectl get pods -n nautobot -l app.kubernetes.io/component=nautobot-celery
# Check worker logs for successful DB/Redis connections
kubectl logs -n nautobot -l app.kubernetes.io/component=nautobot-celery --tail=50
Final directory structure
<site-name>/nautobot-worker/
externalsecret-dockerconfigjson-github-com.yaml
externalsecret-nautobot-db.yaml
externalsecret-nautobot-django.yaml
externalsecret-nautobot-mtls-client.yaml
externalsecret-nautobot-custom-env.yaml
externalsecret-nautobot-worker-redis.yaml
kustomization.yaml
values.yaml
Certificate Renewal
For details on how mTLS client certificates are renewed and distributed
to site clusters, see the
mTLS Certificate Renewal
operator guide.
Environment Variable Reference
| Variable | Where Set | Purpose |
|---|---|---|
| `NAUTOBOT_DB_SSLMODE` | Both global and site values | Controls PostgreSQL SSL mode. Set to `verify-ca` for mTLS on all pods. |
| `NAUTOBOT_DB_SSLNEGOTIATION` | Optional in global and site values | If set to `direct`, starts the TLS handshake immediately after TCP connect. Requires PostgreSQL/libpq 17+ and `NAUTOBOT_DB_SSLMODE=require` or stronger. |
| `NAUTOBOT_DB_SSLCERT` | Optional override | Path to client cert for PG (default: `/etc/nautobot/mtls/tls.crt`) |
| `NAUTOBOT_DB_SSLKEY` | Optional override | Path to client key for PG (default: `/etc/nautobot/mtls/tls.key`) |
| `NAUTOBOT_DB_SSLROOTCERT` | Optional override | Path to CA cert for PG (default: `/etc/nautobot/mtls/ca.crt`) |
| `NAUTOBOT_REDIS_SSL_CERT_REQS` | Site worker values | Set to `required` to enforce Redis server cert verification |
| `NAUTOBOT_REDIS_SSL_CA_CERTS` | Site worker values | Path to CA cert for Redis |
| `NAUTOBOT_REDIS_SSL_CERTFILE` | Site worker values | Path to client cert for Redis |
| `NAUTOBOT_REDIS_SSL_KEYFILE` | Site worker values | Path to client key for Redis |
| `SSL_CERT_FILE` | Optional site value | System-wide CA bundle override for outbound HTTPS |
| `REQUESTS_CA_BUNDLE` | Optional site value | Python `requests` library CA bundle override |
| `NAUTOBOT_CONFIG` | Both global and site | Path to `nautobot_config.py` |
| `UNDERSTACK_PARTITION` | `cluster-data` ConfigMap (patched by ArgoCD from appLabels) | Site partition identifier used by computed fields (e.g. device URN generation). Exposed as a Django setting. |
| `UNDERSTACK_SITE` | `cluster-data` ConfigMap (patched by ArgoCD from appLabels) | Site identifier available to worker pods. The same app label is used by ArgoCD to render the worker name and Celery queue. |
| Deployment-specific integration variables | `nautobot-custom-env` secret | Extra credentials and runtime settings consumed by the selected `nautobot_config.py`. |
Design Decisions
- The cert-manager CA hierarchy (self-signed bootstrap -> root CA ->
  CA issuer) handles issuance and renewal on the global cluster. Site
  clusters receive the issued client certificate through their external
  secrets provider.
- CNPG's native TLS support (`serverTLSSecret`, `serverCASecret`,
  `clientCASecret`, `replicationTLSSecret`) integrates directly with
  cert-manager secrets; no sidecar proxies or custom TLS termination
  are needed. `clientCASecret` populates PostgreSQL's `ssl_ca_file` for
  client cert verification during `pg_hba` `cert` auth. It points to the
  CA public cert secret (`mtls-ca-cert`). `replicationTLSSecret`
  provides the streaming replication client cert so CNPG does not need
  the CA private key in `clientCASecret`.
- The `routes.tls` type in the Envoy Gateway template uses a
  `gatewayPort` field to support non-443 ports for TLS passthrough.
  PostgreSQL (5432) and Redis (6379) both use this route type.
- The `pg_hba` `cert` method with CN-to-user mapping means the client
  certificate CN (e.g. `app`) maps directly to the PostgreSQL user, so
  no additional user mapping configuration is needed.
- Client certificates are issued on the global cluster by cert-manager
  and distributed to site clusters via the external secrets provider.
  The CA private key never leaves the global cluster, so a compromised
  site cannot forge certificates for other sites.
- The `nautobot_config.py` SSL logic is conditional on
  `NAUTOBOT_DB_SSLMODE`, so the same config file works for both global
  pods and site workers. All pods set `verify-ca` to present client
  certificates for `pg_hba` `cert` authentication. If a deployment sets
  `NAUTOBOT_DB_SSLNEGOTIATION=direct`, keep that setting paired with
  PostgreSQL/libpq 17 or newer.
- The Redis mTLS logic in `nautobot_config.py` auto-detects the CA cert
  file at the default mount path. If the cert volume is mounted, Redis
  mTLS is configured automatically.
Known Gotchas
- **`clientCASecret` is required for client cert verification.** CNPG
  uses `clientCASecret` to populate PostgreSQL's `ssl_ca_file`, which
  is what verifies client certificates during `pg_hba` `cert` auth.
  `serverCASecret` only provides the CA cert sent to clients for server
  verification -- it does NOT populate `ssl_ca_file`. Without
  `clientCASecret`, CNPG auto-generates its own internal replication CA
  and uses that for `ssl_ca_file`, causing `tlsv1 alert unknown ca`
  errors for external client certs. When providing `clientCASecret`,
  you must also set `replicationTLSSecret` so CNPG does not need the
  CA private key (`ca.key`) in the `clientCASecret` secret.
- **SSL config must be conditional.** The mTLS config in
  `nautobot_config.py` is gated on the `NAUTOBOT_DB_SSLMODE` env var.
  Both global pods and site workers must set it to `verify-ca`. If the
  env var is unset, no SSL options are applied and the connection will
  be rejected by the `hostssl ... cert` pg_hba rule.
- **Deploy-specific configs may expect `nautobot-custom-env`.** If the
  selected deploy config reads extra environment variables from the
  `nautobot-custom-env` secret, keep the ExternalSecret in the site
  worker kustomization.
- **The `mtls-ca-cert` secret contains a private key.** cert-manager
  Certificate resources always produce `tls.crt`, `tls.key`, and
  `ca.crt`. CNPG only reads `ca.crt` from the referenced secret, so
  the extra fields are harmless but not ideal. A future improvement
  could use a cert-manager `trust-manager` Bundle to distribute only the
  CA cert.
- **`ca.crt` must be the CA cert, not the client cert.** The `ca.crt`
  field in the `nautobot-mtls-client` secret must contain the mTLS CA
  certificate (`CN=understack-mtls-ca`), not the client certificate.
  If `ca.crt` contains the client cert, the worker will fail with
  `[SSL: CERTIFICATE_VERIFY_FAILED] self-signed certificate in certificate chain`
  because it can't verify the server's cert chain.
  The CA cert credential in your secrets provider is shared across all
  sites and only needs to be created once.
- **PEM data with carriage returns.** Some secrets providers store text
  with `\r\n` line endings. PEM certificates with `\r` characters will
  fail OpenSSL parsing with `[SSL] PEM lib`. Use the `filterPEM`
  template function to extract PEM blocks by type -- it handles
  carriage-return stripping automatically. Avoid manual `regexFind` +
  `replace "\r" ""` patterns.
- **ExternalSecret format depends on your secrets provider.** The
  ExternalSecret for the mTLS client cert on site clusters must produce
  a `kubernetes.io/tls` secret with `tls.crt`, `tls.key`, and `ca.crt`.
  How you template this depends on how your secrets provider stores the
  credential.
- **Redis `authClients` affects all connections.** Redis
  `authClients: true` requires ALL clients (including global Nautobot
  pods) to present client certificates. The global Nautobot values must
  mount the mTLS client cert into both the web server and celery pods,
  not just site workers.
- **pg_hba uses cert auth for all connections.** The single
  `hostssl all all 0.0.0.0/0 cert` rule requires every connection --
  local and remote -- to present a valid client certificate over TLS.
  All pods (global and site workers) must have `NAUTOBOT_DB_SSLMODE`
  set to `verify-ca` and the mTLS client cert mounted.
- **`defaultMode` 256 vs 0400.** The `defaultMode: 256` (octal 0400) on
  the cert secret volume mount is correct but easy to get wrong. YAML
  interprets `0400` as octal (decimal 256) -- writing `256` explicitly
  avoids ambiguity.
- **Client cert CN must match the DB user.** When using `pg_hba` `cert`
  auth, PostgreSQL maps the client certificate CN to the database user.
  The site worker client cert must use `commonName: app` to match the
  CNPG app user. If the CN doesn't match, the connection is rejected
  even with a valid cert.
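The octal/decimal equivalence behind the `defaultMode` gotcha is easy to verify in a Python shell:

```python
import stat

# YAML treats a leading zero as octal, so 0400 in a manifest is the
# decimal value 256 that Kubernetes reports back.
assert 0o400 == 256

# 0400 is the owner-read-only permission bit.
assert stat.S_IRUSR == 0o400
```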
Troubleshooting
Worker pod fails to start with FileNotFoundError
The nautobot_config.py validates that cert files exist when
NAUTOBOT_DB_SSLMODE is verify-ca or verify-full. If the
nautobot-mtls-client secret doesn't exist or the volume mount is
misconfigured, the pod will crash with:
FileNotFoundError: SSL certificate file required by NAUTOBOT_DB_SSLCERT not found: /etc/nautobot/mtls/tls.crt
Check that:
- The `nautobot-mtls-client` secret exists on the site cluster:
  `kubectl get secret nautobot-mtls-client -n nautobot`
- The ExternalSecret is syncing successfully:
  `kubectl get externalsecret nautobot-mtls-client -n nautobot`
- The secret contains `tls.crt`, `tls.key`, and `ca.crt` keys
- On the global cluster, verify the source certificate is issued:
  `kubectl get certificate -n nautobot | grep mtls-client`
PostgreSQL rejects connection with "tlsv1 alert unknown ca"
PostgreSQL's ssl_ca_file does not contain the CA that signed the
client certificate. This is a TLS-level rejection that happens before
pg_hba rules are evaluated.
The most common cause is that clientCASecret is not set on the CNPG
Cluster resource. Without it, CNPG auto-generates its own internal
replication CA and uses that for ssl_ca_file. External client certs
signed by the mTLS CA will be rejected.
Verify what CA PostgreSQL is actually using:
kubectl exec -n nautobot nautobot-cluster-1 -c postgres -- \
openssl x509 -noout -subject -in /controller/certificates/client-ca.crt
If it shows CN=nautobot-cluster (CNPG's internal CA) instead of
CN=understack-mtls-ca, set clientCASecret and
replicationTLSSecret on the CNPG Cluster. See the
PostgreSQL mTLS
operator guide for details.
PostgreSQL rejects connection with "certificate verify failed"
The client cert is not signed by the CA that CNPG trusts. Verify the
CA chain:
# On the site cluster, check the client cert's issuer
kubectl get secret nautobot-mtls-client -n nautobot \
-o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -issuer
# On the global cluster, check the CA cert that CNPG uses
kubectl get secret mtls-ca-cert -n nautobot \
-o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject
The issuer of the client cert should match the subject of the CA cert.
PostgreSQL rejects with "no pg_hba.conf entry"
The connection doesn't match any pg_hba rule. Common causes:
- The client is connecting without TLS, but the only matching rule
  requires `hostssl`
- The client cert CN doesn't match the DB user (for `cert` auth)
- The source IP doesn't match any rule's CIDR
Redis connection refused with "certificate verify failed"
The ca.crt mounted in the pod is not the CA that signed the Redis
server certificate. Verify:
# Should show CN=understack-mtls-ca (the CA), NOT CN=app (the client cert)
kubectl get secret nautobot-mtls-client -n nautobot \
-o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject
If it shows the client cert CN, the CA cert credential in your secrets
provider has the wrong content. Update it with the actual CA certificate
from the global cluster's mtls-ca-cert secret.
Redis connection refused with TLS error
If Redis has authClients: true and the connecting pod doesn't present
a client cert, the TLS handshake fails. Ensure the pod has the mTLS
cert volume mounted and the Redis SSL env vars are set.
Envoy Gateway not routing traffic
If the gateway listener doesn't appear or traffic isn't reaching the
backend:
# Check gateway status
kubectl get gateway -n envoy-gateway -o yaml
# Check TLSRoute status
kubectl get tlsroute -n nautobot -o yaml
Verify the fqdn in the TLS route matches the SNI hostname the client
is connecting to. For PostgreSQL, the nautobot.db.host in the worker
values must match the fqdn in the envoy-configs route.