Helm configuration
This page documents the v2.x subchart-based Helm chart. If you are still using the v1.x inline-template chart, see Helm configuration (v1.x). For migration steps, see the Upgrade guide.
This guide covers configuration options for ClickStack Helm deployments. For basic installation, see the main Helm deployment guide.
Values organization
The v2.x chart organizes values by Kubernetes resource type under the hyperdx: block:
All environment variables flow through two statically named resources, shared by the HyperDX Deployment and the OTEL Collector via envFrom:
- clickstack-config ConfigMap — populated from hyperdx.config
- clickstack-secret Secret — populated from hyperdx.secrets
There is no longer a separate OTEL-specific ConfigMap. Both workloads read from the same sources.
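The layout above can be sketched as follows (the individual keys shown are illustrative examples, not an exhaustive list):

```yaml
hyperdx:
  # Non-sensitive environment variables, rendered into the clickstack-config ConfigMap
  config:
    FRONTEND_URL: https://hyperdx.yourdomain.com
  # Sensitive values, rendered into the clickstack-secret Secret
  secrets:
    HYPERDX_API_KEY: ""
```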
API key setup
After successfully deploying ClickStack, configure the API key to enable telemetry data collection:
- Access your HyperDX instance via the configured ingress or service endpoint
- Log into the HyperDX dashboard and navigate to Team settings to generate or retrieve your API key
- Update your deployment with the API key using one of the following methods:
Method 1: Update via Helm upgrade with values file
Add the API key to your values.yaml:
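For example (the HYPERDX_API_KEY key name is an assumption; use whatever key your chart version expects under hyperdx.secrets):

```yaml
hyperdx:
  secrets:
    HYPERDX_API_KEY: "<your-api-key>"
```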
Then upgrade your deployment:
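A typical upgrade command, with placeholder release and chart names:

```shell
helm upgrade <release-name> <chart-reference> -f values.yaml
```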
Method 2: Update via Helm upgrade with --set flag
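A sketch of the equivalent one-liner (again assuming the HYPERDX_API_KEY key name; release and chart names are placeholders):

```shell
helm upgrade <release-name> <chart-reference> \
  --set hyperdx.secrets.HYPERDX_API_KEY=<your-api-key>
```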
Restart pods to apply changes
After updating the API key, restart the pods to pick up the new configuration:
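Because the variables are injected via envFrom, running pods only pick up changes on restart. For example (deployment names are placeholders):

```shell
kubectl rollout restart deployment <hyperdx-deployment-name>
kubectl rollout restart deployment <otel-collector-deployment-name>
```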
The chart automatically creates a Kubernetes secret (clickstack-secret) with your configuration values. No additional secret configuration is needed unless you want to use an external secret.
Secret management
For handling sensitive data such as API keys or database credentials, the v2.x chart provides a unified clickstack-secret resource populated from hyperdx.secrets.
Default secret values
The chart ships with default values for all secrets. Override them in your values.yaml:
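A minimal sketch (only MONGODB_PASSWORD is confirmed on this page; other key names may vary by chart version):

```yaml
hyperdx:
  secrets:
    MONGODB_PASSWORD: "<strong-password>"
```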
Using an external secret
For production deployments where you want to keep credentials separate from Helm values, use an external Kubernetes secret:
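For example, assuming the same key names used by the chart's built-in secret:

```shell
kubectl create secret generic my-clickstack-secret \
  --from-literal=HYPERDX_API_KEY=<your-api-key> \
  --from-literal=MONGODB_PASSWORD=<strong-password>
```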
Then reference it in your values:
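The exact value key for pointing the chart at an external secret depends on your chart version; a plausible shape is:

```yaml
hyperdx:
  existingSecret: my-clickstack-secret  # hypothetical key name; check your chart's values reference
```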
Ingress setup
To expose the HyperDX UI and API via a domain name, enable ingress in your values.yaml.
General ingress configuration
hyperdx.frontendUrl should match the ingress host and include the protocol (e.g., https://hyperdx.yourdomain.com). This ensures that all generated links, cookies, and redirects work correctly.
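A minimal sketch (hyperdx.frontendUrl and ingress.host are the settings named on this page; the rest of the ingress block layout is illustrative):

```yaml
hyperdx:
  frontendUrl: https://hyperdx.yourdomain.com
ingress:
  enabled: true
  host: hyperdx.yourdomain.com
```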
Enabling TLS (HTTPS)
To secure your deployment with HTTPS:
1. Create a TLS secret with your certificate and key:
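For example (the secret name hyperdx-tls is arbitrary):

```shell
kubectl create secret tls hyperdx-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```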
2. Enable TLS in your ingress configuration:
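A sketch of the corresponding values (the exact tls block schema depends on your chart version):

```yaml
ingress:
  tls:
    enabled: true
    secretName: hyperdx-tls
```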
Example ingress configuration
For reference, here's what the generated ingress resource looks like:
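A representative, hand-written example (names, the backend port, and paths are illustrative, not chart output):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clickstack-hyperdx
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hyperdx.yourdomain.com
      secretName: hyperdx-tls
  rules:
    - host: hyperdx.yourdomain.com
      http:
        paths:
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: clickstack-hyperdx
                port:
                  number: 3000
```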
Common ingress pitfalls
Path and rewrite configuration:
- For Next.js and other SPAs, always use a regex path and rewrite annotation as shown above
- Don't use a bare path: / without a rewrite, as this will break static asset serving
Mismatched frontendUrl and ingress.host:
- If these don't match, you may experience issues with cookies, redirects, and asset loading
TLS misconfiguration:
- Ensure your TLS secret is valid and referenced correctly in the ingress
- Browsers may block insecure content if you access the app over HTTP when TLS is enabled
Ingress controller version:
- Some features (like regex paths and rewrites) require recent versions of the NGINX ingress controller
- Check your version with:
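For the community NGINX ingress controller, one way to check the deployed version (namespace and labels may differ in your cluster):

```shell
kubectl -n ingress-nginx get pods \
  -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].spec.containers[0].image}'
```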
OTEL collector ingress
If you need to expose your OTEL collector endpoints (for traces, metrics, logs) through ingress, use the additionalIngresses configuration. This is useful for sending telemetry data from outside the cluster or using a custom domain for the collector.
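A hedged sketch of such a configuration (the additionalIngresses schema shown here is illustrative and may differ from your chart version; 4318 is the standard OTLP/HTTP port):

```yaml
ingress:
  additionalIngresses:
    - name: otel-collector
      host: otel.yourdomain.com
      annotations:
        nginx.ingress.kubernetes.io/use-regex: "true"
      paths:
        - path: /v1/(traces|metrics|logs)
          pathType: ImplementationSpecific
          port: 4318
      tls:
        - hosts:
            - otel.yourdomain.com
          secretName: otel-tls
```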
- This creates a separate ingress resource for the OTEL collector endpoints
- You can use a different domain, configure specific TLS settings, and apply custom annotations
- The regex path rule allows you to route all OTLP signals (traces, metrics, logs) through a single rule
If you don't need to expose the OTEL collector externally, you can skip this configuration. For most users, the general ingress setup is sufficient.
Alternatively, you can use additionalManifests to define fully custom ingress resources, such as an AWS ALB Ingress.
OTEL collector configuration
The OTEL Collector is deployed via the official OpenTelemetry Collector Helm chart as the otel-collector: subchart. Configure it directly under otel-collector: in your values:
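For example, using standard keys from the upstream collector chart (mode and resources are real upstream values; the settings shown are illustrative):

```yaml
otel-collector:
  mode: deployment
  resources:
    limits:
      cpu: "1"
      memory: 1Gi
```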
Environment variables (ClickHouse endpoint, OpAMP URL, etc.) are shared via the unified clickstack-config ConfigMap and clickstack-secret Secret. The subchart's extraEnvsFrom is pre-wired to read from both.
See the OpenTelemetry Collector Helm chart for all available subchart values.
MongoDB configuration
MongoDB is managed by the MCK operator via a MongoDBCommunity custom resource. The CR spec is rendered verbatim from mongodb.spec:
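A minimal sketch (type, members, and version are standard MongoDBCommunity CRD fields; the values are illustrative):

```yaml
mongodb:
  spec:
    type: ReplicaSet
    members: 1
    version: "7.0.5"
```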
The MongoDB password is set in hyperdx.secrets.MONGODB_PASSWORD. See the MCK documentation for all available CRD fields.
ClickHouse configuration
ClickHouse is managed by the ClickHouse Operator via ClickHouseCluster and KeeperCluster custom resources. Both CR specs are rendered verbatim from values:
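A hedged sketch, assuming top-level clickhouse and keeper value blocks whose spec maps onto the respective CRDs (the field shown is illustrative; consult the operator's CRD reference for real fields):

```yaml
clickhouse:
  spec:
    replicas: 1
keeper:
  spec:
    replicas: 3
```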
ClickHouse user credentials are sourced from hyperdx.secrets (not clickhouse.config.users as in v1.x). See the ClickHouse Operator configuration guide for all available CRD fields.
Troubleshooting ingress
Check ingress resource:
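For example:

```shell
kubectl get ingress
kubectl describe ingress <ingress-name>
```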
Check ingress controller logs:
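For the community NGINX controller (deployment name and namespace may differ in your cluster):

```shell
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
```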
Test asset URLs:
Use curl to verify static assets are served as JS, not HTML:
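For example (the asset path is a placeholder; substitute a real JS asset URL from your app):

```shell
curl -sI https://hyperdx.yourdomain.com/<asset-path>.js | grep -i '^content-type'
# A JavaScript MIME type is expected here; text/html indicates a broken rewrite
```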
Browser DevTools:
- Check the Network tab for 404s or assets returning HTML instead of JS
- Look for errors like Unexpected token < in the console (this indicates HTML was returned for JS)
Check for path rewrites:
- Ensure the ingress isn't stripping or incorrectly rewriting asset paths
Clear browser and CDN cache:
- After changes, clear your browser cache and any CDN/proxy cache to avoid stale assets
Customizing values
You can customize settings by using --set flags:
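For example (release and chart names are placeholders; hyperdx.frontendUrl is a setting named on this page):

```shell
helm upgrade <release-name> <chart-reference> \
  --set hyperdx.frontendUrl=https://hyperdx.yourdomain.com
```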
Alternatively, create a custom values.yaml. To retrieve the default values:
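For example, dumping the chart defaults to a file you can edit:

```shell
helm show values <chart-reference> > custom-values.yaml
```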
Apply your custom values:
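For example:

```shell
helm upgrade <release-name> <chart-reference> -f custom-values.yaml
```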
Next steps
- Deployment options - External systems and minimal deployments
- Cloud deployments - GKE, EKS, and AKS configurations
- Upgrade guide - Migrating from v1.x to v2.x
- Additional manifests - Custom Kubernetes objects
- Main Helm guide - Basic installation
- Configuration (v1.x) - v1.x configuration guide