Orchestration with Kubernetes (Insecure)


On top of CockroachDB's built-in automation, you can use a third-party orchestration system to simplify and automate even more of your operations, from deployment to scaling to overall cluster management.

This page demonstrates a basic integration with the open-source Kubernetes orchestration system. Using either the CockroachDB Helm chart or a few configuration files, you'll quickly create a 3-node local cluster. You'll run some SQL commands against the cluster and then simulate node failure, watching how Kubernetes auto-restarts the failed node without the need for any manual intervention. You'll then scale the cluster with a single command before shutting the cluster down, again with a single command.

Note:

To orchestrate a physically distributed cluster in production, see Orchestrated Deployments. To deploy a 30-day free CockroachDB Dedicated cluster instead of running CockroachDB yourself, see the Quickstart.

Best practices

Kubernetes version

To deploy CockroachDB v24.3, Kubernetes 1.18 or higher is required. Cockroach Labs strongly recommends that you use a Kubernetes version that is eligible for patch support by the Kubernetes project.
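
To check both your kubectl client and Kubernetes server versions before deploying:

    $ kubectl version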

Kubernetes Operator

  • The CockroachDB Kubernetes Operator currently deploys clusters in a single region. For multi-region deployments using manual configs, see Orchestrate CockroachDB Across Multiple Kubernetes Clusters.

  • Using the Operator, you can give a new cluster an arbitrary number of labels. However, a cluster's labels cannot be modified after it is deployed. To track the status of this limitation, refer to #993 in the Operator project's issue tracker.

Helm version

The CockroachDB Helm chart requires Helm 3.0 or higher. If you attempt to use an incompatible Helm version, an error like the following occurs:

Error: UPGRADE FAILED: template: cockroachdb/templates/tests/client.yaml:6:14: executing "cockroachdb/templates/tests/client.yaml" at <.Values.networkPolicy.enabled>: nil pointer evaluating interface {}.enabled
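
To confirm which Helm version is installed:

    $ helm version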

The CockroachDB Helm chart is currently not under active development, and no new features are planned. However, Cockroach Labs remains committed to fully supporting the Helm chart by fixing defects, providing security patches, and addressing breaking changes due to deprecations in Kubernetes APIs.

A deprecation notice for the Helm chart will be provided to customers a minimum of 6 months in advance of actual deprecation.

Network

Server Name Indication (SNI) is an extension to the TLS protocol that allows a client to indicate the hostname it is attempting to connect to during the initial TLS handshake. This lets a server present multiple certificates on the same IP address and TCP port number, so one server can serve multiple secure websites or API services even if they use different certificates.

Due to its order of operations, the PostgreSQL wire protocol's implementation of TLS is not compatible with SNI-based routing in the Kubernetes ingress controller. Instead, use a TCP load balancer for CockroachDB that is not shared with other services.
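
For example, a dedicated TCP load balancer can be created with a Service of type LoadBalancer. This is a minimal sketch; the service name and selector labels are placeholders that must match your own deployment:

    apiVersion: v1
    kind: Service
    metadata:
      name: cockroachdb-lb    # hypothetical name
    spec:
      type: LoadBalancer      # provisions a layer-4 (TCP) load balancer, avoiding SNI-based ingress routing
      selector:
        app: cockroachdb      # must match the labels on your CockroachDB pods
      ports:
      - name: sql
        port: 26257           # CockroachDB's default SQL port
        targetPort: 26257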

Resources

When starting Kubernetes, select machines with at least 4 vCPUs and 16 GiB of memory, and provision at least 2 vCPUs and 8 GiB of memory to CockroachDB per pod. These minimum settings are used by default in this deployment guide and are appropriate for testing purposes only. On a production deployment, you should adjust the resource settings for your workload. For details, see Resource management.
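
In a pod spec, these minimums translate to resource requests and limits on the CockroachDB container. A sketch, assuming the 2 vCPU / 8 GiB figures above:

    resources:
      requests:
        cpu: "2"
        memory: "8Gi"
      limits:
        cpu: "2"
        memory: "8Gi"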

Storage

Kubernetes deployments use external persistent volumes that are often replicated by the provider. CockroachDB replicates data automatically, and this redundant layer of replication can impact performance. Using local volumes may improve performance.
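
For example, local volumes are typically exposed through a StorageClass with no dynamic provisioner. A minimal sketch (the class name is a placeholder, and the local PersistentVolumes themselves must be created separately):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage                       # hypothetical name
    provisioner: kubernetes.io/no-provisioner   # local volumes are not dynamically provisioned
    volumeBindingMode: WaitForFirstConsumer     # bind only once a pod is scheduled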

Before you begin

Before getting started, it's helpful to review some Kubernetes-specific terminology:

  • minikube: A tool commonly used to run a Kubernetes cluster on a local workstation.

  • pod: A pod is a group of one or more containers managed by Kubernetes. In this tutorial, all pods run on your local workstation. Each pod contains a single container that runs a single-node CockroachDB cluster. You'll start with 3 pods and grow to 4.

  • StatefulSet: A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.

  • persistent volume: A persistent volume is storage mounted in a pod and available to its containers. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using minikube, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.

  • persistent volume claim: When a pod is created, it requests a persistent volume claim to claim durable storage for its node.
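
Once the cluster is running, you can list the persistent volumes and claims that Kubernetes created for it:

    $ kubectl get pv,pvc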

Step 1. Start Kubernetes

  1. Follow the Minikube documentation to install the latest version of minikube, a hypervisor, and the kubectl command-line tool.

  2. Start a local Kubernetes cluster:

    $ minikube start
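
    You can then confirm that the local cluster is up and that kubectl is pointed at it:

    $ minikube status
    $ kubectl get nodes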
    

Step 2. Start CockroachDB

To start your CockroachDB cluster, you can either use our StatefulSet configuration and related files directly, or you can use the Helm package manager for Kubernetes to simplify the process.

Using the configuration files directly:

  1. From your local workstation, use our cockroachdb-statefulset.yaml file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:

    $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
    
    service/cockroachdb-public created
    service/cockroachdb created
    poddisruptionbudget.policy/cockroachdb-budget created
    statefulset.apps/cockroachdb created
    
  2. Confirm that three pods are Running successfully. Note that they will not be considered Ready until after the cluster has been initialized:

    $ kubectl get pods
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-0   0/1       Running   0          2m
    cockroachdb-1   0/1       Running   0          2m
    cockroachdb-2   0/1       Running   0          2m
    
  3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:

    $ kubectl get pv
    
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                           REASON    AGE
    pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-0             26s
    pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-1             27s
    pvc-5315efda-8bd5-11e6-a4f4-42010a800002   1Gi        RWO           Delete          Bound     default/datadir-cockroachdb-2             27s
    
  4. Use our cluster-init.yaml file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:

    $ kubectl create \
    -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
    
    job.batch/cluster-init created
    
  5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered Ready:

    $ kubectl get job cluster-init
    
    NAME           COMPLETIONS   DURATION   AGE
    cluster-init   1/1           7s         27s
    
    $ kubectl get pods
    
    NAME                 READY   STATUS      RESTARTS   AGE
    cluster-init-cqf8l   0/1     Completed   0          56s
    cockroachdb-0        1/1     Running     0          7m51s
    cockroachdb-1        1/1     Running     0          7m51s
    cockroachdb-2        1/1     Running     0          7m51s
    
Tip:

The StatefulSet configuration sets all CockroachDB nodes to log to stderr, so if you ever need access to a pod/node's logs to troubleshoot, use kubectl logs <podname> rather than checking the log on the persistent volume.

Using the Helm package manager:

  1. Install the Helm client (version 3.0 or higher) and add the cockroachdb chart repository:

    $ helm repo add cockroachdb https://charts.cockroachdb.com/
    
    "cockroachdb" has been added to your repositories
    
  2. Update your Helm chart repositories to ensure that you're using the latest CockroachDB chart:

    $ helm repo update
    
  3. The cluster configuration is set in the Helm chart's values file.

    Note:

    By default, the Helm chart specifies CPU and memory resources that are appropriate for the virtual machines used in this deployment example. On a production cluster, you should substitute values that are appropriate for your machines and workload. For details on configuring your deployment, see Configure the Cluster.

    Before deploying, modify some parameters in our Helm chart's values file:

    1. Create a local YAML file (e.g., my-values.yaml) to specify your custom values. These will be used to override the defaults in values.yaml.
    2. To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. We recommend setting conf.cache and conf.max-sql-memory each to 1/4 of the memory allocation specified in statefulset.resources.requests and statefulset.resources.limits.

      Tip:

      For example, if you are allocating 8Gi of memory to each CockroachDB node, allocate 2Gi to cache and 2Gi to max-sql-memory.

      conf:
        cache: "2Gi"
        max-sql-memory: "2Gi"
      

      The Helm chart defaults to a secure deployment by automatically setting tls.enabled to true. For an insecure deployment, set tls.enabled to false:

      tls:
        enabled: false
      

    Your values file should look similar to:

    conf:
      cache: "2Gi"
      max-sql-memory: "2Gi"
    tls:
      enabled: false
    

    Refer to the CockroachDB Helm chart's values.yaml template.
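
    If you want to preview the manifests Helm will render from your values file before installing anything, you can do a dry run (using the my-release name adopted in the next step):

    $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb --dry-run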

  4. Install the CockroachDB Helm chart, specifying your custom values file.

    Provide a "release" name to identify and track this particular deployment of the chart, and override the default values with those in my-values.yaml.

    Note:

    This tutorial uses my-release as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.

    Warning:

    To allow the CockroachDB pods to successfully deploy, do not set the --wait flag when using Helm commands.

    $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb
    
    Behind the scenes, this command uses our cockroachdb-statefulset.yaml file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
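
    To confirm what was created, you can list the Helm release and the StatefulSet it manages:

    $ helm list
    $ kubectl get statefulsets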

  5. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing 1/1 under READY and the pod for initialization showing Completed under STATUS:

    $ kubectl get pods
    
    NAME                                READY     STATUS      RESTARTS   AGE
    my-release-cockroachdb-0            1/1       Running     0          8m
    my-release-cockroachdb-1            1/1       Running     0          8m
    my-release-cockroachdb-2            1/1       Running     0          8m
    my-release-cockroachdb-init-hxzsc   0/1       Completed   0          1h
    
  6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:

    $ kubectl get pv
    
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                      STORAGECLASS   REASON    AGE
    pvc-71019b3a-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-0   standard                 11m
    pvc-7108e172-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-1   standard                 11m
    pvc-710dcb66-fc67-11e8-a606-080027ba45e5   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-2   standard                 11m
    

Step 3. Use the built-in SQL client

  1. Launch a temporary interactive pod and start the built-in SQL client inside it, setting --host to cockroachdb-public for a manual deployment or my-release-cockroachdb-public for a Helm deployment:

    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- sql \
    --insecure \
    --host=cockroachdb-public
    
    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- sql \
    --insecure \
    --host=my-release-cockroachdb-public
    
  2. Run some basic CockroachDB SQL statements:

    > CREATE DATABASE bank;
    
    > CREATE TABLE bank.accounts (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        balance DECIMAL
      );
    
    > INSERT INTO bank.accounts (balance)
      VALUES
          (1000.50), (20000), (380), (500), (55000);
    
    > SELECT * FROM bank.accounts;
    
                       id                  | balance
    +--------------------------------------+---------+
      6f123370-c48c-41ff-b384-2c185590af2b |     380
      990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50
      ac31c671-40bf-4a7b-8bee-452cff8a4026 |     500
      d58afd93-5be9-42ba-b2e2-dc00dcedf409 |   20000
      e6d8f696-87f5-4d3c-a377-8e152fdc27f7 |   55000
    (5 rows)
    
  3. Exit the SQL shell and delete the temporary pod:

    > \q
    

Step 4. Access the DB Console

To access the cluster's DB Console:

  1. In a new terminal window, port-forward from your local machine to the cockroachdb-public service (my-release-cockroachdb-public if you deployed with Helm):

    $ kubectl port-forward service/cockroachdb-public 8080
    
    $ kubectl port-forward service/my-release-cockroachdb-public 8080
    
    Forwarding from 127.0.0.1:8080 -> 8080
    
    Note:
    The port-forward command must be run on the same machine as the web browser in which you want to view the DB Console. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.
  2. Go to http://localhost:8080.

  3. In the UI, verify that the cluster is running as expected:

    • View the Node List to ensure that all nodes successfully joined the cluster.
    • Click the Databases tab on the left to verify that bank is listed.

Step 5. Simulate node failure

Based on the replicas: 3 line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
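
You can inspect the desired and current replica counts directly on the StatefulSet (named my-release-cockroachdb for a Helm deployment):

    $ kubectl get statefulset cockroachdb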

To see this in action:

  1. Terminate one of the CockroachDB nodes (my-release-cockroachdb-2 if you deployed with Helm):

    $ kubectl delete pod cockroachdb-2
    
    pod "cockroachdb-2" deleted
    
    $ kubectl delete pod my-release-cockroachdb-2
    
    pod "my-release-cockroachdb-2" deleted
    
  2. In the DB Console, the Cluster Overview will soon show one node as Suspect. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.

  3. Back in the terminal, verify that the pod was automatically restarted:

    $ kubectl get pod cockroachdb-2
    
    NAME            READY     STATUS    RESTARTS   AGE
    cockroachdb-2   1/1       Running   0          12s
    
    $ kubectl get pod my-release-cockroachdb-2
    
    NAME                       READY     STATUS    RESTARTS   AGE
    my-release-cockroachdb-2   1/1       Running   0          44s
    

Step 6. Add nodes

  1. Use the kubectl scale command to add a pod for another CockroachDB node (the StatefulSet is named my-release-cockroachdb if you deployed with Helm):

    $ kubectl scale statefulset cockroachdb --replicas=4
    
    statefulset "cockroachdb" scaled
    
    $ kubectl scale statefulset my-release-cockroachdb --replicas=4
    
    statefulset "my-release-cockroachdb" scaled
    
  2. Verify that the pod for a fourth node, cockroachdb-3 (or my-release-cockroachdb-3), was added successfully:

    $ kubectl get pods
    
    NAME                      READY     STATUS    RESTARTS   AGE
    cockroachdb-0             1/1       Running   0          28m
    cockroachdb-1             1/1       Running   0          27m
    cockroachdb-2             1/1       Running   0          10m
    cockroachdb-3             1/1       Running   0          5s
    example-545f866f5-2gsrs   1/1       Running   0          25m
    
    NAME                                 READY     STATUS    RESTARTS   AGE
    my-release-cockroachdb-0             1/1       Running   0          28m
    my-release-cockroachdb-1             1/1       Running   0          27m
    my-release-cockroachdb-2             1/1       Running   0          10m
    my-release-cockroachdb-3             1/1       Running   0          5s
    example-545f866f5-2gsrs              1/1       Running   0          25m
    

Step 7. Remove nodes

To safely remove a node from your cluster, you must first decommission the node and only then adjust the spec.replicas value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.

Warning:

If you remove nodes without first telling CockroachDB to decommission them, you may cause data unavailability or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see Prepare for graceful shutdown.

  1. Launch a temporary interactive pod and use the cockroach node status command to get the internal IDs of nodes, again setting --host to match your deployment:

    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- node status \
    --insecure \
    --host=cockroachdb-public
    
      id |                          address                          |    build     |            started_at            |            updated_at            | is_available | is_live
    +----+-----------------------------------------------------------+--------------+----------------------------------+----------------------------------+--------------+---------+
       1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true         | true
       2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true         | true
       3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true         | true
       4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true         | true
    (4 rows)
    
    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- node status \
    --insecure \
    --host=my-release-cockroachdb-public
    
      id |                                     address                                     |    build     |            started_at            |            updated_at            | is_available | is_live
    +----+---------------------------------------------------------------------------------+--------------+----------------------------------+----------------------------------+--------------+---------+
       1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true         | true
       2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true         | true
       3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true         | true
       4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v24.3.0-rc.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true         | true
    (4 rows)
    
  2. Note the ID of the node with the highest number in its address (in this case, the address including cockroachdb-3) and use the cockroach node decommission command to decommission it:

    Note:

    It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node.

    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- node decommission <node ID> \
    --insecure \
    --host=cockroachdb-public
    
    $ kubectl run cockroachdb -it \
    --image=cockroachdb/cockroach:v24.3.0-rc.1 \
    --rm \
    --restart=Never \
    -- node decommission <node ID> \
    --insecure \
    --host=my-release-cockroachdb-public
    

    You'll then see the decommissioning status print to stderr as it changes:

      id | is_live | replicas | is_decommissioning |   membership    | is_draining
    -----+---------+----------+--------------------+-----------------+--------------
       4 |  true   |       73 |        true        | decommissioning |    false    
    

    Once the node has been fully decommissioned, you'll see a confirmation:

      id | is_live | replicas | is_decommissioning |   membership    | is_draining
    -----+---------+----------+--------------------+-----------------+--------------
       4 |  true   |        0 |        true        | decommissioning |    false    
    (1 row)
    
    No more data reported on target nodes. Please verify cluster health before removing the nodes.
    
  3. Once the node has been decommissioned, remove a pod from your StatefulSet, using kubectl scale for a manual deployment or helm upgrade for a Helm deployment:

    $ kubectl scale statefulset cockroachdb --replicas=3
    
    statefulset "cockroachdb" scaled
    
    $ helm upgrade \
    my-release \
    cockroachdb/cockroachdb \
    --set statefulset.replicas=3 \
    --reuse-values
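
    You can then confirm that only three CockroachDB pods remain:

    $ kubectl get pods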
    

Step 8. Stop the cluster

  • If you plan to restart the cluster, use the minikube stop command. This shuts down the minikube virtual machine but preserves all the resources you created:

    $ minikube stop
    
    Stopping local Kubernetes cluster...
    Machine stopped.
    

    You can restore the cluster to its previous state with minikube start.

  • If you do not plan to restart the cluster, use the minikube delete command. This shuts down and deletes the minikube virtual machine and all the resources you created, including persistent volumes:

    $ minikube delete
    
    Deleting local Kubernetes cluster...
    Machine deleted.
    
    Tip:
    To retain logs, copy them from each pod's stderr before deleting the cluster and all its resources. To access a pod's standard error stream, run kubectl logs <podname>.

See also

You might also want to learn how to orchestrate a production deployment of CockroachDB with Kubernetes.

