
Helm

Chart version 2.x

This page documents the v2.x subchart-based Helm chart. If you are still using the v1.x inline-template chart, see the v1.x Helm guide. For migration steps, see the Upgrade guide.

The Helm chart for ClickStack is available in the ClickStack-helm-charts repository and is the recommended method for production deployments.

The v2.x chart uses a two-phase installation. Operators and CRDs are installed first via the clickstack-operators chart, followed by the main clickstack chart which creates operator-managed custom resources for ClickHouse, MongoDB, and the OpenTelemetry Collector.

By default, the Helm chart provisions all core components, including:

  • ClickHouse
  • HyperDX
  • MongoDB
  • An OpenTelemetry (OTel) collector

However, it can be easily customized to integrate with an existing ClickHouse deployment — for example, one hosted in ClickHouse Cloud.

The chart supports standard Kubernetes best practices, including:

  • Environment-specific configuration via values.yaml
  • Resource limits and pod-level scaling
  • TLS and ingress configuration
  • Secrets management and authentication setup
  • Additional manifests for deploying arbitrary Kubernetes objects (NetworkPolicy, HPA, ALB Ingress, etc.) alongside the chart
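
As an illustration of the last point, arbitrary extra objects can be supplied alongside the chart. The key name below (additionalManifests) is an assumption — check helm show values clickstack/clickstack for the field your chart version actually supports:

```yaml
# Hypothetical values.yaml fragment: deploy a NetworkPolicy alongside the chart.
# The top-level key name may differ in your chart version.
additionalManifests:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: clickstack-allow-same-namespace
    spec:
      podSelector:
        matchLabels:
          app.kubernetes.io/name: clickstack
      ingress:
        - from:
            - podSelector: {}   # allow traffic from pods in the same namespace
```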

Suitable for

  • Proofs of concept
  • Production

Deployment steps


Prerequisites

  • Helm v3+
  • Kubernetes cluster (v1.20+ recommended)
  • kubectl configured to interact with your cluster

Add the ClickStack Helm repository

Add the ClickStack Helm repository:

helm repo add clickstack https://clickhouse.github.io/ClickStack-helm-charts
helm repo update

Install the operators

Install the operator chart first. This registers the CRDs required by the main chart:

helm install clickstack-operators clickstack/clickstack-operators

Wait for the operator pods to become ready before proceeding:

kubectl get pods -l app.kubernetes.io/instance=clickstack-operators
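
Rather than polling manually, you can block until the operator pods report Ready (the timeout value here is illustrative):

```shell
kubectl wait pod \
  -l app.kubernetes.io/instance=clickstack-operators \
  --for=condition=Ready \
  --timeout=180s
```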

Install ClickStack

Once the operators are running, install the main chart:

helm install my-clickstack clickstack/clickstack

Verify the installation

Verify the installation:

kubectl get pods -l "app.kubernetes.io/name=clickstack"

When all pods are ready, proceed.

Forward ports

Port forwarding lets you access and set up HyperDX. Users deploying to production should instead expose the service via an ingress or load balancer to ensure proper network access, TLS termination, and scalability. Port forwarding is best suited to local development or one-off administrative tasks, not long-term or high-availability environments.

kubectl port-forward \
  pod/$(kubectl get pod -l app.kubernetes.io/name=clickstack -o jsonpath='{.items[0].metadata.name}') \
  8080:3000

Production Ingress Setup

For production deployments, configure ingress with TLS instead of port forwarding. See the Ingress Configuration guide for detailed setup instructions.

Visit http://localhost:8080 to access the HyperDX UI.

Create a user, providing a username and password that meet the requirements.

When you click Create, data sources are created for the ClickHouse instance deployed with the Helm chart.

Overriding default connection

You can override the default connection to the integrated ClickHouse instance. For details, see "Using ClickHouse Cloud".

Customizing values (optional)

You can customize settings using --set flags. For example:

helm install my-clickstack clickstack/clickstack --set key=value

Alternatively, edit values.yaml directly. To retrieve the default values:

helm show values clickstack/clickstack > values.yaml

Example config:

hyperdx:
  frontendUrl: "https://hyperdx.example.com"

  deployment:
    replicas: 2
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
      requests:
        cpu: 500m
        memory: 1Gi

  ingress:
    enabled: true
    host: hyperdx.example.com
    tls:
      enabled: true
      tlsSecretName: "hyperdx-tls"

Then install with your customized values:

helm install my-clickstack clickstack/clickstack -f values.yaml

Using secrets (optional)

The v2.x chart uses a unified secret (clickstack-secret) populated from hyperdx.secrets in your values. All sensitive environment variables — including ClickHouse passwords, MongoDB passwords, and the HyperDX API key — flow through this single secret.
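
To confirm which values ended up in the unified secret, you can list its keys and decode one (the key name below is taken from the list that follows; jq is assumed to be installed):

```shell
# List the keys stored in the unified secret
kubectl get secret clickstack-secret -o jsonpath='{.data}' | jq 'keys'

# Decode a single value, e.g. the ClickHouse password
kubectl get secret clickstack-secret \
  -o jsonpath='{.data.CLICKHOUSE_PASSWORD}' | base64 -d
```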

To override secret values:

hyperdx:
  secrets:
    HYPERDX_API_KEY: "your-api-key"
    CLICKHOUSE_PASSWORD: "your-clickhouse-password"
    CLICKHOUSE_APP_PASSWORD: "your-app-password"
    MONGODB_PASSWORD: "your-mongodb-password"

For external secret management (e.g. using a secrets operator), you can reference a pre-existing Kubernetes secret:

hyperdx:
  useExistingConfigSecret: true
  existingConfigSecret: "my-external-secret"
  existingConfigConnectionsKey: "connections.json"
  existingConfigSourcesKey: "sources.json"

API Key Management

For detailed API key setup instructions including multiple configuration methods and pod restart procedures, see the API Key Setup guide.

Using ClickHouse Cloud

If using ClickHouse Cloud, disable the built-in ClickHouse instance and provide your Cloud credentials:

# values-clickhouse-cloud.yaml
clickhouse:
  enabled: false

hyperdx:
  secrets:
    CLICKHOUSE_PASSWORD: "your-cloud-password"
    CLICKHOUSE_APP_PASSWORD: "your-cloud-password"

  useExistingConfigSecret: true
  existingConfigSecret: "clickhouse-cloud-config"
  existingConfigConnectionsKey: "connections.json"
  existingConfigSourcesKey: "sources.json"

Create the connection secret separately:

cat <<EOF > connections.json
[
  {
    "name": "ClickHouse Cloud",
    "host": "https://your-cloud-instance.clickhouse.cloud",
    "port": 8443,
    "username": "default",
    "password": "your-cloud-password"
  }
]
EOF

kubectl create secret generic clickhouse-cloud-config \
  --from-file=connections.json=connections.json

rm connections.json

Then install with the Cloud values file:

helm install my-clickstack clickstack/clickstack -f values-clickhouse-cloud.yaml

Advanced External Configurations

For production deployments with secret-based configuration, external OTEL collectors, or minimal setups, see the Deployment Options guide.

Production notes

By default, this chart installs ClickHouse, MongoDB, and the OTel collector. For production, it is recommended that you manage ClickHouse and the OTel collector separately.

To disable ClickHouse and the OTel collector:

clickhouse:
  enabled: false

otel-collector:
  enabled: false

Production Best Practices

For production deployments including high availability configuration, resource management, ingress/TLS setup, and cloud-specific configurations (GKE, EKS, AKS), see the Production Best Practices guide.

Task configuration

By default, the chart sets up one task as a CronJob, responsible for checking whether alerts should fire. In v2.x, task configuration has moved under hyperdx.tasks:

| Parameter | Description | Default |
| --- | --- | --- |
| hyperdx.tasks.enabled | Enables or disables cron tasks in the cluster. By default, the HyperDX image runs cron tasks in-process; set to true to run them as separate cron tasks in the cluster instead. | false |
| hyperdx.tasks.checkAlerts.schedule | Cron schedule for the check-alerts task | */1 * * * * |
| hyperdx.tasks.checkAlerts.resources | Resource requests and limits for the check-alerts task | See values.yaml |
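
For example, to run check-alerts as a separate cron task in the cluster on the default schedule, a values fragment might look like this (the resource figures are illustrative, not recommendations):

```yaml
hyperdx:
  tasks:
    enabled: true          # run cron tasks in the cluster instead of in-process
    checkAlerts:
      schedule: "*/1 * * * *"   # every minute (the chart default)
      resources:
        limits:
          cpu: 200m
          memory: 256Mi
```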

Upgrading the chart

To upgrade to a newer version:

helm upgrade my-clickstack clickstack/clickstack -f values.yaml

To check available chart versions:

helm search repo clickstack
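
If you want to move to a specific chart version rather than the latest, you can pin it with --version (the version number shown is illustrative):

```shell
helm upgrade my-clickstack clickstack/clickstack \
  --version 2.1.0 \
  -f values.yaml
```
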

Upgrading from v1.x

If you are upgrading from the v1.x inline-template chart, see the Upgrade guide for migration instructions. This is a breaking change — an in-place helm upgrade is not supported.

Uninstalling ClickStack

Uninstall in reverse order:

helm uninstall my-clickstack            # Remove app + CRs first
helm uninstall clickstack-operators     # Remove operators + CRDs

Note: PersistentVolumeClaims created by the MongoDB and ClickHouse operators are not removed by helm uninstall. This is by design, to prevent accidental data loss. If you no longer need the data, delete the PVCs manually.
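
A manual cleanup might look like the following; list the PVCs first and verify the names before deleting anything, since deletion is irreversible:

```shell
# List PVCs left behind in the release namespace
kubectl get pvc

# Delete a specific PVC once you are sure its data is no longer needed
kubectl delete pvc <pvc-name>
```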

Troubleshooting

Checking logs

kubectl logs -l app.kubernetes.io/name=clickstack

Debugging a failed install

helm install my-clickstack clickstack/clickstack --debug --dry-run

Verifying deployment

kubectl get pods -l app.kubernetes.io/name=clickstack
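
If a pod stays Pending or crash-loops, describing it and checking recent cluster events usually reveals the cause:

```shell
# Show scheduling and container status detail for the first matching pod
kubectl describe pod \
  $(kubectl get pod -l app.kubernetes.io/name=clickstack -o jsonpath='{.items[0].metadata.name}')

# Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp
```
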

Additional Troubleshooting Resources

For ingress-specific issues, TLS problems, or cloud deployment troubleshooting, see the Ingress Configuration and Deployment Options guides.

JSON type support

Beta feature: not production ready

JSON type support in ClickStack is a beta feature. While the JSON type itself is production-ready in ClickHouse 25.3+, its integration within ClickStack is still under active development and may have limitations, change in the future, or contain bugs.

ClickStack has offered beta support for the JSON type since version 2.0.4.

For the benefits of this type see Benefits of the JSON type.

To enable support for the JSON type, you must set the following environment variables:

  • OTEL_AGENT_FEATURE_GATE_ARG='--feature-gates=clickhouse.json' - enables support in the OTel collector, ensuring schemas are created using the JSON type.
  • BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true (ClickStack Open Source only) - enables support in the ClickStack UI application, allowing JSON data to be queried.

You can set these environment variables via hyperdx.config in your values.yaml:

hyperdx:
  config:
    BETA_CH_OTEL_JSON_SCHEMA_ENABLED: "true"
    OTEL_AGENT_FEATURE_GATE_ARG: "--feature-gates=clickhouse.json"

or via --set:

helm install my-clickstack clickstack/clickstack \
  --set "hyperdx.config.BETA_CH_OTEL_JSON_SCHEMA_ENABLED=true" \
  --set "hyperdx.config.OTEL_AGENT_FEATURE_GATE_ARG=--feature-gates=clickhouse.json"
