bash test/kbcli/test_kbcli_0.9.sh --type 29 --version 0.9.5 --generate-output true --chaos-mesh true --drain-node true --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-ylzrl`
`kubectl create namespace ns-ylzrl`
namespace/ns-ylzrl created
create namespace ns-ylzrl done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.5-beta.8`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
	https://kubeblocks.io
download kbcli v0.9.5-beta.8 done
Kubernetes Env: v1.32.6
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
kubeblocks version is:0.9.5
skip upgrade kubeblocks
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check cluster definition
set component name:clickhouse
set component version
set component version:clickhouse
set service versions:22.3.18,22.3.20,22.9.4,24.8.3,25.4.4
set service versions
sorted:22.3.18,22.3.20,22.9.4,24.8.3,25.4.4
no cluster version found
unsupported component definition
REPORT_COUNT 0:0
set replicas first:2,22.3.18|2,22.3.20|2,22.9.4|2,24.8.3|2,25.4.4
set replicas third:2,22.3.20
set replicas fourth:2,22.3.18
set minimum cmpv service version
set minimum cmpv service version replicas:2,22.3.18
REPORT_COUNT:1
CLUSTER_TOPOLOGY:cluster
topology cluster found in cluster definition clickhouse
LIMIT_CPU:0.2
LIMIT_MEMORY:1
storage size: 20
No resources found in ns-ylzrl namespace.
termination_policy:DoNotTerminate
create 2 replica DoNotTerminate clickhouse cluster
check cluster definition
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  clusterDefinitionRef: clickhouse
  topology: cluster
  terminationPolicy: DoNotTerminate
  componentSpecs:
    - name: clickhouse
      serviceVersion: 22.3.18
      replicas: 2
      resources:
        requests:
          cpu: 200m
          memory: 1Gi
        limits:
          cpu: 200m
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: ch-keeper
      serviceVersion: 22.3.18
      replicas: 1
      resources:
        requests:
          cpu: 200m
          memory: 1Gi
        limits:
          cpu: 200m
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
`kubectl apply -f test_create_clkhouse-fayllv.yaml`
cluster.apps.kubeblocks.io/clkhouse-fayllv created
apply test_create_clkhouse-fayllv.yaml Success
`rm -rf test_create_clkhouse-fayllv.yaml`
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
clkhouse-fayllv   ns-ylzrl    clickhouse                     DoNotTerminate                Sep 01,2025 11:19 UTC+0800   clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
cluster_status:
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
clkhouse-fayllv-ch-keeper-0    ns-ylzrl    clkhouse-fayllv   ch-keeper    Running   leader                0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000000/10.224.0.5   Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0   ns-ylzrl    clkhouse-fayllv   clickhouse   Running                         0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1   ns-ylzrl    clkhouse-fayllv   clickhouse   Running                         0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"'
| kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check pod clkhouse-fayllv-clickhouse-0 container_name clickhouse exist password px4pt48rFuTURP3y
check pod clkhouse-fayllv-clickhouse-0 container_name lorry exist password px4pt48rFuTURP3y
No container logs contain secret password.
describe cluster
`kbcli cluster describe clkhouse-fayllv --namespace ns-ylzrl`
Name: clkhouse-fayllv	 Created Time: Sep 01,2025 11:19 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION   STATUS    TERMINATION-POLICY
ns-ylzrl    clickhouse                     Running   DoNotTerminate

Endpoints:
COMPONENT    MODE        INTERNAL                                                     EXTERNAL
clickhouse   ReadWrite   clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9000
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:8123
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:8443
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9004
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9005
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:8001
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9009
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9010
                         clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local:9440
ch-keeper    ReadWrite   clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:8123
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:8443
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9000
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9009
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9010
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:8001
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9181
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9234
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9281
                         clkhouse-fayllv-ch-keeper.ns-ylzrl.svc.cluster.local:9440

Topology:
COMPONENT    INSTANCE                       ROLE     STATUS    AZ   NODE                                             CREATED-TIME
ch-keeper    clkhouse-fayllv-ch-keeper-0    leader   Running   0    aks-cicdamdpool-15164480-vmss000000/10.224.0.5   Sep 01,2025 11:19 UTC+0800
clickhouse   clkhouse-fayllv-clickhouse-0            Running   0    aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800
clickhouse   clkhouse-fayllv-clickhouse-1            Running   0    aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800

Resources Allocation:
COMPONENT    DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
clickhouse   false       200m / 200m          1Gi / 1Gi               data:20Gi      default
ch-keeper    false       200m / 200m          1Gi / 1Gi               data:20Gi      default

Images:
COMPONENT    TYPE   IMAGE
clickhouse          docker.io/apecloud/clickhouse:22.3.18-debian-11-r3
ch-keeper           docker.io/apecloud/clickhouse:22.3.18-debian-11-r3

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-ylzrl clkhouse-fayllv

`kbcli cluster label clkhouse-fayllv app.kubernetes.io/instance- --namespace ns-ylzrl`
label "app.kubernetes.io/instance" not found.
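The `check pod ... exist password` step above scans every container's logs for the decoded DB password before reporting "No container logs contain secret password." A minimal sketch of that check, under the assumption that it is a plain substring search; the `logs_contain_secret` helper name is ours, not the harness's, and the pod name and password are the ones from this run:

```shell
#!/usr/bin/env bash
# Return success (exit 0) when the log text on stdin contains the secret string.
logs_contain_secret() {
  grep -qF -- "$1"
}

# Cluster usage sketch (assumes this run's pod name and password):
#   for c in $(kubectl get pod clkhouse-fayllv-clickhouse-0 -o jsonpath='{.spec.containers[*].name}'); do
#     kubectl logs clkhouse-fayllv-clickhouse-0 -c "$c" \
#       | logs_contain_secret "px4pt48rFuTURP3y" && echo "leak in container $c"
#   done

# Local demonstration against captured log text:
if printf 'connecting as admin\n' | logs_contain_secret "px4pt48rFuTURP3y"; then
  echo "leak detected"
else
  echo "No container logs contain secret password."
fi
```

Using `grep -F` treats the password as a literal string, so passwords containing regex metacharacters cannot cause false matches.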
`kbcli cluster label clkhouse-fayllv app.kubernetes.io/instance=clkhouse-fayllv --namespace ns-ylzrl`
`kbcli cluster label clkhouse-fayllv --list --namespace ns-ylzrl`
NAME              NAMESPACE   LABELS
clkhouse-fayllv   ns-ylzrl    app.kubernetes.io/instance=clkhouse-fayllv clusterdefinition.kubeblocks.io/name=clickhouse clusterversion.kubeblocks.io/name=
label cluster app.kubernetes.io/instance=clkhouse-fayllv Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=clkhouse-fayllv --namespace ns-ylzrl`
`kbcli cluster label clkhouse-fayllv --list --namespace ns-ylzrl`
NAME              NAMESPACE   LABELS
clkhouse-fayllv   ns-ylzrl    app.kubernetes.io/instance=clkhouse-fayllv case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=clickhouse clusterversion.kubeblocks.io/name=
label cluster case.name=kbcli.test1 Success
`kbcli cluster label clkhouse-fayllv case.name=kbcli.test2 --overwrite --namespace ns-ylzrl`
`kbcli cluster label clkhouse-fayllv --list --namespace ns-ylzrl`
NAME              NAMESPACE   LABELS
clkhouse-fayllv   ns-ylzrl    app.kubernetes.io/instance=clkhouse-fayllv case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=clickhouse clusterversion.kubeblocks.io/name=
label cluster case.name=kbcli.test2 Success
`kbcli cluster label clkhouse-fayllv case.name- --namespace ns-ylzrl`
`kbcli cluster label clkhouse-fayllv --list --namespace ns-ylzrl`
NAME              NAMESPACE   LABELS
clkhouse-fayllv   ns-ylzrl    app.kubernetes.io/instance=clkhouse-fayllv clusterdefinition.kubeblocks.io/name=clickhouse clusterversion.kubeblocks.io/name=
delete cluster label case.name Success
cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o
jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT * FROM system.clusters"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
default 1 1 1 clkhouse-fayllv-clickhouse-0.clkhouse-fayllv-clickhouse-headless.ns-ylzrl.svc.cluster.local 10.244.1.210 9000 1 admin 0 0 0
default 1 1 2 clkhouse-fayllv-clickhouse-1.clkhouse-fayllv-clickhouse-headless.ns-ylzrl.svc.cluster.local 10.244.1.220 9000 0 admin 0 0 0
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-clkhouse-fayllv" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-fayllv --namespace ns-ylzrl`
Error from server (NotFound): pods "test-db-client-executionloop-clkhouse-fayllv" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
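The `kubectl get secrets ... -o jsonpath` calls above return base64-encoded Secret field values, which the harness decodes into `DB_USERNAME`, `DB_PASSWORD`, and `DB_PORT`. A minimal decoding sketch; the `decode_field` helper name is ours, and the secret name is the one from this run:

```shell
#!/usr/bin/env bash
# Kubernetes stores Secret data base64-encoded; decode one field value.
decode_field() {
  printf '%s' "$1" | base64 -d
}

# Cluster usage sketch (assumes this run's secret name):
#   SECRET=clkhouse-fayllv-clickhouse-account-admin
#   DB_USERNAME=$(decode_field "$(kubectl get secrets "$SECRET" -o jsonpath='{.data.username}')")
#   DB_PASSWORD=$(decode_field "$(kubectl get secrets "$SECRET" -o jsonpath='{.data.password}')")

# Local demonstration: "YWRtaW4=" is base64 for "admin".
decode_field "YWRtaW4="   # prints: admin
```

`kubectl get secrets -o jsonpath` never decodes for you; forgetting the `base64 -d` step is a common reason a connection test fails with an authentication error.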
Error from server (NotFound): pods "test-db-client-executionloop-clkhouse-fayllv" not found
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local"
        - "--user"
        - "admin"
        - "--password"
        - "px4pt48rFuTURP3y"
        - "--port"
        - "8123"
        - "--dbtype"
        - "clickhouse"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-clkhouse-fayllv.yaml`
pod/test-db-client-executionloop-clkhouse-fayllv created
apply test-db-client-executionloop-clkhouse-fayllv.yaml Success
`rm -rf test-db-client-executionloop-clkhouse-fayllv.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 5s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 9s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 15s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 20s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 25s
pod_status:NAME READY STATUS
RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 ContainerCreating 0 30s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 35s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 40s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 45s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 50s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 55s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 61s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 66s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 71s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 76s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 81s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 86s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 91s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 1/1 Running 0 96s
check pod test-db-client-executionloop-clkhouse-fayllv status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-fayllv 0/1 Completed 0 102s
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
clkhouse-fayllv   ns-ylzrl    clickhouse                     DoNotTerminate       Running   Sep 01,2025 11:19 UTC+0800
app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
clkhouse-fayllv-ch-keeper-0    ns-ylzrl    clkhouse-fayllv   ch-keeper    Running   leader                0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000000/10.224.0.5   Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0   ns-ylzrl    clkhouse-fayllv   clickhouse   Running                         0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1   ns-ylzrl    clkhouse-fayllv   clickhouse   Running                         0    200m / 200m          1Gi / 1Gi               data:20Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:26 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
--host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --user admin --password px4pt48rFuTURP3y --port 8123 --dbtype clickhouse --test executionloop --duration 60 --interval 1
SLF4J(I): Connected with provider of type
[ch.qos.logback.classic.spi.LogbackServiceProvider]
03:29:59.576 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
03:29:59.578 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. *
03:29:59.578 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0. *
03:29:59.578 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
Execution loop start: create database executions_loop
CREATE DATABASE IF NOT EXISTS executions_loop ON CLUSTER default;
drop table executions_loop_table
DROP TABLE IF EXISTS executions_loop.executions_loop_table ON CLUSTER default;
create table executions_loop_table
CREATE TABLE IF NOT EXISTS executions_loop.executions_loop_table ON CLUSTER default (id UInt32, value String) ENGINE = ReplicatedMergeTree() ORDER BY id;
Execution loop start:INSERT INTO executions_loop.executions_loop_table (id, value) VALUES (1, 'executions_loop_test_1');
[ 1s ] executions total: 11 successful: 11 failed: 0 disconnect: 0
[ 2s ] executions total: 35 successful: 35 failed: 0 disconnect: 0
[ 3s ] executions total: 57 successful: 57 failed: 0 disconnect: 0
[ 4s ] executions total: 80 successful: 80 failed: 0 disconnect: 0
[ 5s ] executions total: 104 successful: 104 failed: 0 disconnect: 0
[ 6s ] executions total: 127 successful: 127 failed: 0 disconnect: 0
[ 7s ] executions total: 149 successful: 149 failed: 0 disconnect: 0
[ 8s ] executions total: 171 successful: 171 failed: 0 disconnect: 0
[ 9s ] executions total: 194 successful: 194 failed: 0 disconnect: 0
[ 10s ] executions total: 217 successful: 217 failed: 0 disconnect: 0
[ 11s ] executions total: 240 successful: 240 failed: 0 disconnect: 0
[ 12s ] executions total: 262
successful: 262 failed: 0 disconnect: 0
[ 13s ] executions total: 285 successful: 285 failed: 0 disconnect: 0
[ 14s ] executions total: 309 successful: 309 failed: 0 disconnect: 0
[ 15s ] executions total: 333 successful: 333 failed: 0 disconnect: 0
[ 16s ] executions total: 357 successful: 357 failed: 0 disconnect: 0
[ 17s ] executions total: 381 successful: 381 failed: 0 disconnect: 0
[ 18s ] executions total: 406 successful: 406 failed: 0 disconnect: 0
[ 19s ] executions total: 430 successful: 430 failed: 0 disconnect: 0
[ 20s ] executions total: 451 successful: 451 failed: 0 disconnect: 0
[ 21s ] executions total: 476 successful: 476 failed: 0 disconnect: 0
[ 22s ] executions total: 500 successful: 500 failed: 0 disconnect: 0
[ 23s ] executions total: 525 successful: 525 failed: 0 disconnect: 0
[ 24s ] executions total: 550 successful: 550 failed: 0 disconnect: 0
[ 25s ] executions total: 575 successful: 575 failed: 0 disconnect: 0
[ 26s ] executions total: 599 successful: 599 failed: 0 disconnect: 0
[ 27s ] executions total: 623 successful: 623 failed: 0 disconnect: 0
[ 28s ] executions total: 648 successful: 648 failed: 0 disconnect: 0
[ 29s ] executions total: 671 successful: 671 failed: 0 disconnect: 0
[ 30s ] executions total: 696 successful: 696 failed: 0 disconnect: 0
[ 31s ] executions total: 720 successful: 720 failed: 0 disconnect: 0
[ 32s ] executions total: 744 successful: 744 failed: 0 disconnect: 0
[ 33s ] executions total: 771 successful: 771 failed: 0 disconnect: 0
[ 34s ] executions total: 800 successful: 800 failed: 0 disconnect: 0
[ 35s ] executions total: 828 successful: 828 failed: 0 disconnect: 0
[ 36s ] executions total: 856 successful: 856 failed: 0 disconnect: 0
[ 37s ] executions total: 872 successful: 872 failed: 0 disconnect: 0
[ 38s ] executions total: 896 successful: 896 failed: 0 disconnect: 0
[ 39s ] executions total: 922 successful: 922 failed: 0 disconnect: 0
[ 40s ] executions total: 941 successful: 941 failed: 0 disconnect: 0
[ 41s ] executions total: 965 successful: 965 failed: 0 disconnect: 0
[ 42s ] executions total: 990 successful: 990 failed: 0 disconnect: 0
[ 43s ] executions total: 1014 successful: 1014 failed: 0 disconnect: 0
[ 44s ] executions total: 1038 successful: 1038 failed: 0 disconnect: 0
[ 45s ] executions total: 1062 successful: 1062 failed: 0 disconnect: 0
[ 46s ] executions total: 1086 successful: 1086 failed: 0 disconnect: 0
[ 47s ] executions total: 1110 successful: 1110 failed: 0 disconnect: 0
[ 48s ] executions total: 1133 successful: 1133 failed: 0 disconnect: 0
[ 49s ] executions total: 1155 successful: 1155 failed: 0 disconnect: 0
[ 50s ] executions total: 1179 successful: 1179 failed: 0 disconnect: 0
[ 51s ] executions total: 1202 successful: 1202 failed: 0 disconnect: 0
[ 52s ] executions total: 1226 successful: 1226 failed: 0 disconnect: 0
[ 53s ] executions total: 1250 successful: 1250 failed: 0 disconnect: 0
[ 54s ] executions total: 1274 successful: 1274 failed: 0 disconnect: 0
[ 55s ] executions total: 1298 successful: 1298 failed: 0 disconnect: 0
[ 56s ] executions total: 1322 successful: 1322 failed: 0 disconnect: 0
[ 57s ] executions total: 1346 successful: 1346 failed: 0 disconnect: 0
[ 58s ] executions total: 1369 successful: 1369 failed: 0 disconnect: 0
[ 60s ] executions total: 1388 successful: 1388 failed: 0 disconnect: 0
Test Result:
Total Executions: 1388
Successful Executions: 1388
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: clickhouse
Host: clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local
Port: 8123
Database:
Table:
User: admin
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
Cluster:
DB_CLIENT_BATCH_DATA_COUNT: 1388
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-fayllv --namespace ns-ylzrl`
pod/test-db-client-executionloop-clkhouse-fayllv patched (no change)
Warning: Immediate
deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-clkhouse-fayllv" force deleted
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in clkhouse-fayllv namespace.
`kbcli cluster hscale clkhouse-fayllv --auto-approve --force=true --components clickhouse --replicas 3 --namespace ns-ylzrl`
OpsRequest clkhouse-fayllv-horizontalscaling-d4qzl created successfully, you can view the progress:
	kbcli cluster describe-ops clkhouse-fayllv-horizontalscaling-d4qzl -n ns-ylzrl
check ops status
`kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT    STATUS     PROGRESS   CREATED-TIME
clkhouse-fayllv-horizontalscaling-d4qzl   ns-ylzrl    HorizontalScaling   clkhouse-fayllv   clickhouse   Creating   -/-        Sep 01,2025 11:31 UTC+0800
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
clkhouse-fayllv   ns-ylzrl    clickhouse                     DoNotTerminate       Updating   Sep 01,2025 11:19 UTC+0800   app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
[Error] check cluster status timeout
--------------------------------------get cluster clkhouse-fayllv yaml--------------------------------------
`kubectl get cluster clkhouse-fayllv -o yaml --namespace ns-ylzrl`
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  annotations:
    kubeblocks.io/ops-request: '[{"name":"clkhouse-fayllv-horizontalscaling-d4qzl","type":"HorizontalScaling"}]'
    kubeblocks.io/reconcile: "2025-09-01T03:37:13.667180644Z"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps.kubeblocks.io/v1alpha1","kind":"Cluster","metadata":{"annotations":{},"name":"clkhouse-fayllv","namespace":"ns-ylzrl"},"spec":{"clusterDefinitionRef":"clickhouse","componentSpecs":[{"name":"clickhouse","replicas":2,"resources":{"limits":{"cpu":"200m","memory":"1Gi"},"requests":{"cpu":"200m","memory":"1Gi"}},"serviceVersion":"22.3.18","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"20Gi"}},"storageClassName":null}}]},{"name":"ch-keeper","replicas":1,"resources":{"limits":{"cpu":"200m","memory":"1Gi"},"requests":{"cpu":"200m","memory":"1Gi"}},"serviceVersion":"22.3.18","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"20Gi"}},"storageClassName":null}}]}],"terminationPolicy":"DoNotTerminate","topology":"cluster"}}
  creationTimestamp: "2025-09-01T03:19:19Z"
  finalizers:
    - cluster.kubeblocks.io/finalizer
  generation: 4
  labels:
    app.kubernetes.io/instance: clkhouse-fayllv
    clusterdefinition.kubeblocks.io/name: clickhouse
    clusterversion.kubeblocks.io/name: ""
  name: clkhouse-fayllv
  namespace: ns-ylzrl
  resourceVersion: "26589"
  uid: 81a2f6d5-d6f7-46ba-b5aa-c4b1ea8b7430
spec:
  clusterDefinitionRef: clickhouse
  componentSpecs:
    - componentDef: clickhouse-24
      name: clickhouse
      replicas: 3
      resources:
        limits:
          cpu: 200m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 1Gi
      serviceVersion: 22.3.18
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - componentDef: ch-keeper-24
      name: ch-keeper
      replicas: 1
      resources:
        limits:
          cpu: 200m
          memory: 1Gi
        requests:
          cpu: 200m
          memory: 1Gi
      serviceVersion: 22.3.18
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
  resources:
    cpu: "0"
    memory: "0"
  storage:
    size: "0"
  terminationPolicy: DoNotTerminate
  topology: cluster
status:
  clusterDefGeneration: 1
  components:
    ch-keeper:
      phase: Running
      podsReady: true
      podsReadyTime: "2025-09-01T03:26:07Z"
    clickhouse:
      phase: Updating
      podsReady: false
      podsReadyTime: "2025-09-01T03:29:14Z"
  conditions:
    - lastTransitionTime: "2025-09-01T03:19:19Z"
      message: 'The operator has started the provisioning of Cluster: clkhouse-fayllv'
      observedGeneration: 4
      reason: PreCheckSucceed
      status: "True"
      type: ProvisioningStarted
    - lastTransitionTime: "2025-09-01T03:19:19Z"
      message: Successfully applied for resources
      observedGeneration: 4
      reason: ApplyResourcesSucceed
      status: "True"
      type: ApplyResources
    - lastTransitionTime: "2025-09-01T03:31:10Z"
      message: 'pods are not ready in Components: [clickhouse], refer to related component message in Cluster.status.components'
      reason: ReplicasNotReady
      status: "False"
      type: ReplicasReady
    - lastTransitionTime: "2025-09-01T03:31:10Z"
      message: 'pods are unavailable in Components: [clickhouse], refer to related component message in Cluster.status.components'
      reason: ComponentsNotReady
      status: "False"
      type: Ready
  observedGeneration: 4
  phase: Updating
------------------------------------------------------------------------------------------------------------------
--------------------------------------describe cluster clkhouse-fayllv--------------------------------------
`kubectl describe cluster clkhouse-fayllv --namespace ns-ylzrl`
Name:         clkhouse-fayllv
Namespace:    ns-ylzrl
Labels:       app.kubernetes.io/instance=clkhouse-fayllv
              clusterdefinition.kubeblocks.io/name=clickhouse
              clusterversion.kubeblocks.io/name=
Annotations:  kubeblocks.io/ops-request: [{"name":"clkhouse-fayllv-horizontalscaling-d4qzl","type":"HorizontalScaling"}]
              kubeblocks.io/reconcile: 2025-09-01T03:37:13.667180644Z
API Version:  apps.kubeblocks.io/v1alpha1
Kind:         Cluster
Metadata:
  Creation Timestamp:  2025-09-01T03:19:19Z
  Finalizers:
    cluster.kubeblocks.io/finalizer
  Generation:        4
  Resource Version:  26589
  UID:               81a2f6d5-d6f7-46ba-b5aa-c4b1ea8b7430
Spec:
  Cluster Definition Ref:  clickhouse
  Component
Specs: Component Def: clickhouse-24 Name: clickhouse Replicas: 3 Resources: Limits: Cpu: 200m Memory: 1Gi Requests: Cpu: 200m Memory: 1Gi Service Version: 22.3.18 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 20Gi Component Def: ch-keeper-24 Name: ch-keeper Replicas: 1 Resources: Limits: Cpu: 200m Memory: 1Gi Requests: Cpu: 200m Memory: 1Gi Service Version: 22.3.18 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 20Gi Resources: Cpu: 0 Memory: 0 Storage: Size: 0 Termination Policy: DoNotTerminate Topology: cluster Status: Cluster Def Generation: 1 Components: Ch - Keeper: Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T03:26:07Z Clickhouse: Phase: Updating Pods Ready: false Pods Ready Time: 2025-09-01T03:29:14Z Conditions: Last Transition Time: 2025-09-01T03:19:19Z Message: The operator has started the provisioning of Cluster: clkhouse-fayllv Observed Generation: 4 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2025-09-01T03:19:19Z Message: Successfully applied for resources Observed Generation: 4 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Last Transition Time: 2025-09-01T03:31:10Z Message: pods are not ready in Components: [clickhouse], refer to related component message in Cluster.status.components Reason: ReplicasNotReady Status: False Type: ReplicasReady Last Transition Time: 2025-09-01T03:31:10Z Message: pods are unavailable in Components: [clickhouse], refer to related component message in Cluster.status.components Reason: ComponentsNotReady Status: False Type: Ready Observed Generation: 4 Phase: Updating Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ComponentPhaseTransition 12m (x2 over 19m) cluster-controller component is Creating Normal ComponentPhaseTransition 9m15s (x2 over 12m) cluster-controller component is Running Normal AllReplicasReady 9m15s 
cluster-controller all pods of components are ready, waiting for the probe detection successful Normal ClusterReady 9m15s cluster-controller Cluster: clkhouse-fayllv is ready, current phase is Running Normal Running 9m15s cluster-controller Cluster: clkhouse-fayllv is ready, current phase is Running Normal PreCheckSucceed 7m20s (x4 over 19m) cluster-controller The operator has started the provisioning of Cluster: clkhouse-fayllv Normal ApplyResourcesSucceed 7m20s (x4 over 19m) cluster-controller Successfully applied for resources Warning ReplicasNotReady 7m19s (x2 over 12m) cluster-controller pods are not ready in Components: [clickhouse], refer to related component message in Cluster.status.components Warning ComponentsNotReady 7m19s (x2 over 12m) cluster-controller pods are unavailable in Components: [clickhouse], refer to related component message in Cluster.status.components Normal HorizontalScale 7m19s (x2 over 7m19s) component-controller start horizontal scale component clickhouse of cluster clkhouse-fayllv from 2 to 3 Normal ComponentPhaseTransition 7m19s cluster-controller component is Updating Warning FailedAttachVolume 76s (x2 over 3m17s) event-controller Pod clkhouse-fayllv-clickhouse-2: AttachVolume.Attach failed for volume "pvc-80a4cf93-5c18-46a2-ac00-24e0057bdd40" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-yfmqkigg-group_cicd-aks-yfmqkigg_eastus/providers/Microsoft.Compute/disks/pvc-80a4cf93-5c18-46a2-ac00-24e0057bdd40 ------------------------------------------------------------------------------------------------------------------ check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 
200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Init:0/1 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 check pod status done check cluster status again No resources found in clkhouse-fayllv namespace. 
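The repeated `pod_status:Init:0/1` lines above come from a poll loop that re-checks pod status until it settles. A minimal sketch of such a wait loop; `get_status` here is a local stub standing in for a live lookup such as `kubectl get pod <pod> --namespace <ns> -o jsonpath='{.status.phase}'` (the helper names are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# Sketch of a status poll: call a status command until it reports the desired
# value or the retry budget runs out, echoing each observation like the log.
state_file="$(mktemp)"
echo 0 > "$state_file"

get_status() {
  # Stub: reports Init:0/1 twice, then Running. The counter lives in a file so
  # it survives the $(...) subshell each call runs in.
  local n
  n="$(cat "$state_file")"
  n=$((n + 1))
  echo "$n" > "$state_file"
  if [ "$n" -ge 3 ]; then echo "Running"; else echo "Init:0/1"; fi
}

wait_for_status() {
  local want="$1" retries="$2" delay="$3" status i
  for ((i = 0; i < retries; i++)); do
    status="$(get_status)"
    echo "pod_status:${status}"
    [ "$status" = "$want" ] && return 0
    sleep "$delay"
  done
  return 1
}

wait_for_status "Running" 5 0
```

In the real script the stub would be replaced by the kubectl call, with a non-zero delay between attempts.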
check ops status
`kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 11:31 UTC+0800
ops_status:clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 11:31 UTC+0800
ops_status:clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 11:31 UTC+0800
ops_status:clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 11:31 UTC+0800
ops_status:clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 11:31 UTC+0800
check ops status done
ops_status:clkhouse-fayllv-horizontalscaling-d4qzl ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 1/1 Sep 01,2025 11:31 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-horizontalscaling-d4qzl --namespace ns-ylzrl `
opsrequest.apps.kubeblocks.io/clkhouse-fayllv-horizontalscaling-d4qzl patched
`kbcli cluster delete-ops --name clkhouse-fayllv-horizontalscaling-d4qzl --force --auto-approve --namespace ns-ylzrl `
OpsRequest clkhouse-fayllv-horizontalscaling-d4qzl deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client
batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster hscale check cluster status before ops check cluster status done cluster_status:Running No resources found in clkhouse-fayllv namespace. `kbcli cluster hscale clkhouse-fayllv --auto-approve --force=true --components clickhouse --replicas 3 --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-horizontalscaling-jw8xg created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-horizontalscaling-jw8xg -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-horizontalscaling-jw8xg ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Sep 01,2025 11:40 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi 
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
No resources found in clkhouse-fayllv namespace.
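The `jsonpath="{.data.…}"` lookups above return base64-encoded Secret values, which must be decoded before they can be used as credentials. A minimal sketch of the decode step, using a known sample value in place of a live `kubectl get secrets` call (the helper name is illustrative):

```shell
#!/usr/bin/env bash
# Kubernetes stores Secret data base64-encoded; decode it before use.
decode_secret_field() {
  # In the real flow the argument would come from:
  #   kubectl get secrets <name> -o jsonpath="{.data.username}"
  printf '%s' "$1" | base64 -d
}

# "YWRtaW4=" is the base64 encoding of "admin", standing in for live output.
DB_USERNAME="$(decode_secret_field 'YWRtaW4=')"
echo "DB_USERNAME:${DB_USERNAME}"
```

Using `printf` rather than `echo` avoids feeding a trailing newline into the decoder.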
check ops status
`kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-fayllv-horizontalscaling-jw8xg ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 0/0 Sep 01,2025 11:40 UTC+0800
check ops status done
ops_status:clkhouse-fayllv-horizontalscaling-jw8xg ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 0/0 Sep 01,2025 11:40 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-horizontalscaling-jw8xg --namespace ns-ylzrl `
opsrequest.apps.kubeblocks.io/clkhouse-fayllv-horizontalscaling-jw8xg patched
`kbcli cluster delete-ops --name clkhouse-fayllv-horizontalscaling-jw8xg --force --auto-approve --namespace ns-ylzrl `
OpsRequest clkhouse-fayllv-horizontalscaling-jw8xg deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check db_client batch data Success
test failover oom
check cluster status before cluster-failover-oom
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos
test-chaos-mesh-oom-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-clkhouse-fayllv" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-clkhouse-fayllv" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-oom-clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  selector:
    namespaces:
    - ns-ylzrl
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0
  mode: all
  stressors:
    memory:
      workers: 1
      size: "100GB"
      oomScoreAdj: -1000
  duration: 2m
`kubectl apply -f test-chaos-mesh-oom-clkhouse-fayllv.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-clkhouse-fayllv created
apply test-chaos-mesh-oom-clkhouse-fayllv.yaml Success
`rm -rf test-chaos-mesh-oom-clkhouse-fayllv.yaml`
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep
01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-clkhouse-fayllv" force deleted
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-clkhouse-fayllv patched
check failover pod name
failover pod name:clkhouse-fayllv-clickhouse-0
failover oom Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check db_client batch data Success
test failover networkbandwidthover
check cluster status before cluster-failover-networkbandwidthover
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-clkhouse-fayllv" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-clkhouse-fayllv" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkbandwidthover-clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  selector:
    namespaces:
    - ns-ylzrl
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0
  action: bandwidth
  mode: all
  bandwidth:
    rate: '1bps'
    limit: 20971520
    buffer: 10000
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkbandwidthover-clkhouse-fayllv.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-clkhouse-fayllv created
apply test-chaos-mesh-networkbandwidthover-clkhouse-fayllv.yaml Success
`rm -rf test-chaos-mesh-networkbandwidthover-clkhouse-fayllv.yaml`
networkbandwidthover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-clkhouse-fayllv" force deleted
networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-clkhouse-fayllv patched
check failover pod name
failover pod name:clkhouse-fayllv-clickhouse-0
failover networkbandwidthover Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check db_client batch data Success
test failover timeoffset
check cluster status before cluster-failover-timeoffset
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-clkhouse-fayllv" not found
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-clkhouse-fayllv" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: TimeChaos
metadata:
  name: test-chaos-mesh-timeoffset-clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  selector:
    namespaces:
    - ns-ylzrl
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0
  mode: all
  timeOffset: '-10m'
  clockIds:
  - CLOCK_REALTIME
  duration: 2m
`kubectl apply -f test-chaos-mesh-timeoffset-clkhouse-fayllv.yaml`
timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-clkhouse-fayllv created
apply test-chaos-mesh-timeoffset-clkhouse-fayllv.yaml Success
`rm -rf test-chaos-mesh-timeoffset-clkhouse-fayllv.yaml`
timeoffset chaos test waiting 120 seconds
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-clkhouse-fayllv" force deleted
timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-clkhouse-fayllv patched
check failover pod name
failover pod name:clkhouse-fayllv-clickhouse-0
failover timeoffset Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check db_client batch data Success
test failover networkduplicate
check cluster status before cluster-failover-networkduplicate
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-clkhouse-fayllv --namespace ns-ylzrl `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-clkhouse-fayllv" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-clkhouse-fayllv" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkduplicate-clkhouse-fayllv
  namespace: ns-ylzrl
spec:
  selector:
    namespaces:
    - ns-ylzrl
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0
  mode: all
  action: duplicate
  duplicate:
    duplicate: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkduplicate-clkhouse-fayllv.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-clkhouse-fayllv created
apply test-chaos-mesh-networkduplicate-clkhouse-fayllv.yaml Success
`rm -rf test-chaos-mesh-networkduplicate-clkhouse-fayllv.yaml`
networkduplicate chaos test waiting 120 seconds
check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800
clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-1 ns-ylzrl
clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-clkhouse-fayllv --namespace ns-ylzrl `
networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-clkhouse-fayllv patched
check failover pod name
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-clkhouse-fayllv" force deleted
failover pod name:clkhouse-fayllv-clickhouse-0
failover networkduplicate Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv`
set secret: clkhouse-fayllv-clickhouse-account-admin
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash`
check db_client batch data Success
test failover drainnode
check cluster status before cluster-failover-drainnode
check cluster status done
cluster_status:Running
check node drain
check node drain success
kubectl get pod clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -o jsonpath='{.spec.nodeName}'
get node name:aks-cicdamdpool-15164480-vmss000005 success
check if multiple pods are on the same node
kubectl get pod clkhouse-fayllv-clickhouse-1 --namespace ns-ylzrl -o jsonpath='{.spec.nodeName}'
get node name:aks-cicdamdpool-15164480-vmss000005 success
Multiple pods on the same node
check component ch-keeper exists
`kubectl get components -l app.kubernetes.io/instance=clkhouse-fayllv,apps.kubeblocks.io/component-name=ch-keeper --namespace ns-ylzrl | (grep "ch-keeper" || true )`
`kubectl get pvc -l app.kubernetes.io/instance=clkhouse-fayllv,apps.kubeblocks.io/component-name=ch-keeper,apps.kubeblocks.io/vct-name=data --namespace ns-ylzrl `
cluster volume-expand
check cluster status before ops check cluster status done cluster_status:Running No resources found in clkhouse-fayllv namespace. `kbcli cluster volume-expand clkhouse-fayllv --auto-approve --force=true --components ch-keeper --volume-claim-templates data --storage 21Gi --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-volumeexpansion-ktwc6 created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-volumeexpansion-ktwc6 -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-volumeexpansion-ktwc6 ns-ylzrl VolumeExpansion clkhouse-fayllv ch-keeper Creating -/- Sep 01,2025 11:47 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli 
cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done No resources found in clkhouse-fayllv namespace.
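The long runs of `cluster_status:Updating` above come from a poll-until-Running loop around the cluster list command. A minimal sketch of that loop; the real status command is a `kbcli cluster list` pipeline not shown in the log, so `$3` here is a stand-in:

```shell
# Poll a status-reporting command until it prints the wanted phase (sketch).
# "$3" is any command that prints the current phase; in the real script this
# would be a kbcli cluster list | awk pipeline (assumed, not shown in the log).
wait_for_status() {
  local want="$1" tries="$2" check="$3" status
  for ((i = 1; i <= tries; i++)); do
    status="$($check)"
    echo "cluster_status:${status}"          # mirrors the log's progress lines
    [ "$status" = "$want" ] && return 0
    sleep "${POLL_INTERVAL:-0}"              # interval configurable via env
  done
  return 1                                    # timed out without reaching phase
}
```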
check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-volumeexpansion-ktwc6 ns-ylzrl VolumeExpansion clkhouse-fayllv ch-keeper Succeed 1/1 Sep 01,2025 11:47 UTC+0800 check ops status done ops_status:clkhouse-fayllv-volumeexpansion-ktwc6 ns-ylzrl VolumeExpansion clkhouse-fayllv ch-keeper Succeed 1/1 Sep 01,2025 11:47 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-volumeexpansion-ktwc6 --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-volumeexpansion-ktwc6 patched `kbcli cluster delete-ops --name clkhouse-fayllv-volumeexpansion-ktwc6 --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-volumeexpansion-ktwc6 deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets
clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "CREATE TABLE test_kbcli (id Int32,name String) ENGINE = MergeTree() ORDER BY id;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` `echo "clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password \"px4pt48rFuTURP3y\" --query \"INSERT INTO test_kbcli VALUES (1,'lgewp');\" " | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file Received exception from server (version 22.3.18): Code: 60. DB::Exception: Received from clkhouse-fayllv-clickhouse:9000. DB::Exception: Table default.test_kbcli doesn't exist.
(UNKNOWN_TABLE) (query: INSERT INTO test_kbcli VALUES (1,'lgewp');) command terminated with exit code 60 `echo "clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password \"px4pt48rFuTURP3y\" --query \"INSERT INTO test_kbcli VALUES (1,'lgewp');\" " | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file `clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT * FROM test_kbcli;"` Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file exec return msg:1 lgewp check msg:[lgewp] equal msg:[1 lgewp] test failover networkdelay check cluster status before cluster-failover-networkdelay check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
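The `check msg:[lgewp] equal msg:[1 lgewp]` verdict above is a substring match: the test passes when the expected token appears anywhere in the exec output. A minimal sketch of that comparison (function name and messages are illustrative):

```shell
# Succeed when the expected token appears in the exec output (sketch of the
# log's "check msg ... equal msg ..." comparison; names are illustrative).
check_msg() {
  local expected="$1" actual="$2"
  case "$actual" in
    *"$expected"*) echo "check msg:[${expected}] equal msg:[${actual}]"; return 0 ;;
    *)             echo "check msg:[${expected}] not in msg:[${actual}]"; return 1 ;;
  esac
}

check_msg "lgewp" "1 lgewp"
```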
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-clkhouse-fayllv" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkdelay-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: delay delay: latency: 2000ms correlation: '100' jitter: 0ms direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkdelay-clkhouse-fayllv.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-clkhouse-fayllv created apply test-chaos-mesh-networkdelay-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-networkdelay-clkhouse-fayllv.yaml` networkdelay chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 
clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:26 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 200m / 200m 1Gi / 1Gi data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:31 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
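The connect checks above pipe a `clickhouse-client` command line into `kubectl exec -it`, which is why every exec also prints `Unable to use a TTY` — stdin is a pipe, not a terminal. A sketch of the pattern as a reusable helper (not executed here, since it needs cluster access; using `-i` instead of `-it` and passing the password via an assumed `CH_PASSWORD` env var are my variations, not the script's):

```shell
# Run a clickhouse-client query inside the pod by piping the command line into
# a remote bash (sketch; -i avoids the TTY warning seen in the log, and
# CH_PASSWORD is an assumed env var rather than an inline password).
ch_query() {  # ch_query <pod> <namespace> <sql>
  echo "clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password \"\$CH_PASSWORD\" --query \"$3\"" \
    | kubectl exec -i "$1" --namespace "$2" -- bash
}
```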
networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-clkhouse-fayllv" force deleted networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-clkhouse-fayllv patched check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover networkdelay Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale clkhouse-fayllv --auto-approve --force=true --components clickhouse --cpu 300m --memory 1.1Gi --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-verticalscaling-nw2mg created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-verticalscaling-nw2mg -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-verticalscaling-nw2mg ns-ylzrl VerticalScaling clkhouse-fayllv clickhouse Creating -/- Sep 01,2025 12:04 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION
TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:06 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get 
secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-verticalscaling-nw2mg ns-ylzrl VerticalScaling clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:04 UTC+0800 check ops status done ops_status:clkhouse-fayllv-verticalscaling-nw2mg ns-ylzrl VerticalScaling clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:04 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-verticalscaling-nw2mg --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-verticalscaling-nw2mg patched `kbcli cluster delete-ops --name clkhouse-fayllv-verticalscaling-nw2mg --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-verticalscaling-nw2mg deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets
clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover dnsrandom check cluster status before cluster-failover-dnsrandom check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
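Before force-deleting an OpsRequest or chaos object, the script first empties its finalizers with a JSON merge patch so the delete cannot hang on a finalizer that the (possibly disrupted) controller never clears. A sketch of that cleanup; the `force_delete` wrapper is illustrative and is defined but not invoked, since the kubectl calls need cluster access:

```shell
# JSON merge patch that clears finalizers so a force delete cannot hang (sketch).
FINALIZER_PATCH='{"metadata":{"finalizers":[]}}'

# Illustrative wrapper: force_delete <kind> <name> <namespace>
force_delete() {
  kubectl patch "$1" "$2" --namespace "$3" --type=merge -p "$FINALIZER_PATCH"
  kubectl delete "$1" "$2" --namespace "$3" --force --grace-period=0
}
```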
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-clkhouse-fayllv" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnsrandom-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: random duration: 2m `kubectl apply -f test-chaos-mesh-dnsrandom-clkhouse-fayllv.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-clkhouse-fayllv created apply test-chaos-mesh-dnsrandom-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-dnsrandom-clkhouse-fayllv.yaml` dnsrandom chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:06 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi 
aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:20Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-clkhouse-fayllv" force deleted Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover dnsrandom Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success `kubectl get pvc -l app.kubernetes.io/instance=clkhouse-fayllv,apps.kubeblocks.io/component-name=clickhouse,apps.kubeblocks.io/vct-name=data --namespace ns-ylzrl ` cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in clkhouse-fayllv namespace.
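The volume-expand step that follows grows the clickhouse `data` PVCs from 20Gi to 21Gi. Verifying the resize means comparing binary-suffixed quantities, which a small normalizer makes straightforward (a sketch; the helper name and the whole-Gi assumption are mine):

```shell
# Convert a whole-number Gi quantity (e.g. "21Gi") to bytes for comparison
# (sketch; only handles the integer-Gi values used in this run).
gi_to_bytes() {
  local n="${1%Gi}"                       # strip the Gi suffix
  echo $(( n * 1024 * 1024 * 1024 ))
}

# A resize check would compare the PVC's reported capacity to the target, e.g.
#   [ "$(gi_to_bytes "$reported")" -ge "$(gi_to_bytes 21Gi)" ]
gi_to_bytes 21Gi
```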
`kbcli cluster volume-expand clkhouse-fayllv --auto-approve --force=true --components clickhouse --volume-claim-templates data --storage 21Gi --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-volumeexpansion-7fjhh created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-volumeexpansion-7fjhh -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-volumeexpansion-7fjhh ns-ylzrl VolumeExpansion clkhouse-fayllv clickhouse Creating -/- Sep 01,2025 12:08 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:06 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi 
aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done No resources found in clkhouse-fayllv namespace.
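The `ops_status:` lines in these checks are scraped from the `kbcli cluster list-ops` table. A sketch of that parse; the column position is assumed from the table layout shown in this log (STATUS is the sixth whitespace-separated field), and the sample row is copied from this run:

```shell
# Pull the STATUS field from a list-ops data row (sketch; column index assumed
# from the NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS ... layout).
ops_status() {
  awk '{ print $6 }'
}

printf '%s\n' 'clkhouse-fayllv-volumeexpansion-7fjhh ns-ylzrl VolumeExpansion clkhouse-fayllv clickhouse Succeed 3/3' | ops_status
```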
check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-volumeexpansion-7fjhh ns-ylzrl VolumeExpansion clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:08 UTC+0800 check ops status done ops_status:clkhouse-fayllv-volumeexpansion-7fjhh ns-ylzrl VolumeExpansion clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:08 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-volumeexpansion-7fjhh --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-volumeexpansion-7fjhh patched `kbcli cluster delete-ops --name clkhouse-fayllv-volumeexpansion-7fjhh --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-volumeexpansion-7fjhh deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover networkcorruptover check cluster status before cluster-failover-networkcorruptover check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos
test-chaos-mesh-networkcorruptover-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-clkhouse-fayllv" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkcorruptover-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: corrupt corrupt: corrupt: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkcorruptover-clkhouse-fayllv.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-clkhouse-fayllv created apply test-chaos-mesh-networkcorruptover-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-networkcorruptover-clkhouse-fayllv.yaml` networkcorruptover chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi 
aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:06 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-clkhouse-fayllv" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover networkcorruptover Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover podkill check cluster status before cluster-failover-podkill check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-clkhouse-fayllv" not found Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podkill-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: pod-kill `kubectl apply -f test-chaos-mesh-podkill-clkhouse-fayllv.yaml` podchaos.chaos-mesh.org/test-chaos-mesh-podkill-clkhouse-fayllv created apply test-chaos-mesh-podkill-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-podkill-clkhouse-fayllv.yaml` check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 200m / 200m 1Gi / 1Gi data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 11:19 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 
clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
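The instance listings report memory as `1181116006400m` rather than a Gi value. This is Kubernetes rendering a fractional quantity in milli-units: a request of 1.1Gi is not a whole number of bytes-friendly units, so it is shown as milli-bytes (1.1 × 1024³ bytes × 1000m per byte). A quick arithmetic check:

```shell
# 1.1Gi expressed in milli-bytes, matching the kubectl/kbcli output above.
awk 'BEGIN { printf "%.0fm\n", 1.1 * 1024 * 1024 * 1024 * 1000 }'
# prints: 1181116006400m
```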
podchaos.chaos-mesh.org "test-chaos-mesh-podkill-clkhouse-fayllv" force deleted podchaos.chaos-mesh.org/test-chaos-mesh-podkill-clkhouse-fayllv patched check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover podkill Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success check component ch-keeper exists `kubectl get components -l app.kubernetes.io/instance=clkhouse-fayllv,apps.kubeblocks.io/component-name=ch-keeper --namespace ns-ylzrl | (grep "ch-keeper" || true )` cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale clkhouse-fayllv --auto-approve --force=true --components ch-keeper --cpu 300m --memory 1.1Gi --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-verticalscaling-l9gcn created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-verticalscaling-l9gcn -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-verticalscaling-l9gcn ns-ylzrl VerticalScaling clkhouse-fayllv ch-keeper Sep 01,2025
12:14 UTC+0800 ops_status:clkhouse-fayllv-verticalscaling-l9gcn ns-ylzrl VerticalScaling clkhouse-fayllv ch-keeper Creating -/- Sep 01,2025 12:14 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l 
app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-verticalscaling-l9gcn ns-ylzrl VerticalScaling clkhouse-fayllv ch-keeper Succeed 1/1 Sep 01,2025 12:14 UTC+0800 check ops status done ops_status:clkhouse-fayllv-verticalscaling-l9gcn ns-ylzrl VerticalScaling clkhouse-fayllv ch-keeper Succeed 1/1 Sep 01,2025 12:14 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-verticalscaling-l9gcn --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-verticalscaling-l9gcn patched `kbcli cluster delete-ops --name clkhouse-fayllv-verticalscaling-l9gcn --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-verticalscaling-l9gcn deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets
clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover networkpartition check cluster status before cluster-failover-networkpartition check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
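Each chaos object is cleaned up with the same merge patch, which empties `metadata.finalizers` so the subsequent force delete cannot get stuck. A minimal local sketch validating that patch document (assumes `python3` is on PATH; no cluster needed):

```shell
# The merge patch the harness applies before force-deleting chaos resources;
# piping it through a JSON parser confirms it is well-formed and that
# "finalizers" parses as an empty list.
patch='{"metadata":{"finalizers":[]}}'
printf '%s' "$patch" | python3 -c 'import json,sys; print(json.load(sys.stdin)["metadata"]["finalizers"])'
# prints: []
```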
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-clkhouse-fayllv" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkpartition-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 action: partition mode: all target: mode: all selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkpartition-clkhouse-fayllv.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-clkhouse-fayllv created apply test-chaos-mesh-networkpartition-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-networkpartition-clkhouse-fayllv.yaml` networkpartition chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi 
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
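The `duration: 2m` in each chaos spec lines up with the harness's fixed "waiting 120 seconds" step. A sketch of that conversion (a hypothetical helper, not part of the actual test script; only the minutes suffix is handled here):

```shell
# Convert a chaos-mesh style duration like "2m" into the seconds the
# harness waits before re-checking cluster status.
d="2m"
secs=$(( ${d%m} * 60 ))   # strip the trailing "m", multiply by 60
echo "$secs"              # prints: 120
```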
networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-clkhouse-fayllv" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover networkpartition Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover fullcpu check cluster status before cluster-failover-fullcpu check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-clkhouse-fayllv" not found Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: StressChaos metadata: name: test-chaos-mesh-fullcpu-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all stressors: cpu: workers: 100 load: 100 duration: 2m `kubectl apply -f test-chaos-mesh-fullcpu-clkhouse-fayllv.yaml` stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpu-clkhouse-fayllv created apply test-chaos-mesh-fullcpu-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-fullcpu-clkhouse-fayllv.yaml` fullcpu chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m 
data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
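The test config's `LIMIT_CPU:0.2` shows up as `200m` in the cluster YAML and instance lists, and the vertical scale to `300m` means 0.3 of a core: Kubernetes CPU quantities use millicores, 1000m per core. The conversion:

```shell
# 0.2 cores expressed in millicores, matching the 200m request/limit
# in the generated cluster spec.
awk 'BEGIN { printf "%dm\n", 0.2 * 1000 }'
# prints: 200m
```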
stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-clkhouse-fayllv" force deleted Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover fullcpu Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover dnserror check cluster status before cluster-failover-dnserror check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-clkhouse-fayllv" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnserror-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: error duration: 2m `kubectl apply -f test-chaos-mesh-dnserror-clkhouse-fayllv.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-clkhouse-fayllv created apply test-chaos-mesh-dnserror-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-dnserror-clkhouse-fayllv.yaml` dnserror chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi 
aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:05 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:04 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
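The repeated `cluster_status:Updating` lines come from a poll-until-Running loop around `kbcli cluster list`. A self-contained sketch of that pattern, with the kbcli call stubbed by a counter so it runs anywhere:

```shell
# Poll pattern used by the harness: re-check status until it reports Running.
# status() stubs `kbcli cluster list`, flipping to Running on the 3rd check.
status() { [ "$1" -ge 3 ] && echo "Running" || echo "Updating"; }
i=0
while [ "$(status "$i")" != "Running" ]; do
  i=$((i + 1))
done
echo "polls=$i"   # prints: polls=3
```

In the real script each iteration would also sleep between checks and give up after a timeout; the stub keeps only the control flow.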
dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-clkhouse-fayllv" force deleted Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover dnserror Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart clkhouse-fayllv --auto-approve --force=true --components clickhouse --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-restart-zlc5d created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-restart-zlc5d -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-restart-zlc5d ns-ylzrl Restart clkhouse-fayllv clickhouse Creating -/- Sep 01,2025 12:22 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:23 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o 
jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-restart-zlc5d ns-ylzrl Restart clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:22 UTC+0800 check ops status done ops_status:clkhouse-fayllv-restart-zlc5d ns-ylzrl Restart clkhouse-fayllv clickhouse Succeed 3/3 Sep 01,2025 12:22 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-restart-zlc5d --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-restart-zlc5d patched `kbcli cluster delete-ops --name clkhouse-fayllv-restart-zlc5d --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-restart-zlc5d deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo
'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover kill1 check cluster status before cluster-failover-kill1 check cluster status done cluster_status:Running check node drain check node drain success `kill 1` Defaulted container "clickhouse" out of: clickhouse, lorry, init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file exec return message: check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:23 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl 
clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover kill1 Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover
connectionstress check cluster status before cluster-failover-connectionstress check cluster status done cluster_status:Running check node drain check node drain success Error from server (NotFound): pods "test-db-client-connectionstress-clkhouse-fayllv" not found `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-fayllv --namespace ns-ylzrl ` Error from server (NotFound): pods "test-db-client-connectionstress-clkhouse-fayllv" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): pods "test-db-client-connectionstress-clkhouse-fayllv" not found `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default apiVersion: v1 kind: Pod metadata: name: test-db-client-connectionstress-clkhouse-fayllv namespace: ns-ylzrl spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - "clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local" - "--user" - "admin" - "--password" - "px4pt48rFuTURP3y" - "--port" - "8123" - "--database" - "default" - "--dbtype" - "clickhouse" - "--test" - "connectionstress" - "--connections" - "4096" - "--duration" - "60" restartPolicy: Never `kubectl apply -f test-db-client-connectionstress-clkhouse-fayllv.yaml` pod/test-db-client-connectionstress-clkhouse-fayllv created apply
test-db-client-connectionstress-clkhouse-fayllv.yaml Success `rm -rf test-db-client-connectionstress-clkhouse-fayllv.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-fayllv 1/1 Running 0 5s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-fayllv 1/1 Running 0 9s check pod test-db-client-connectionstress-clkhouse-fayllv status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-fayllv 0/1 Completed 0 14s check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:23 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 
UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --user admin --password px4pt48rFuTURP3y --port 8123 --database default --dbtype clickhouse --test connectionstress --connections 4096 --duration 60 SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] 04:24:36.488 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** 04:24:36.490 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. * 04:24:36.490 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0.
* 04:24:36.490 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** Test Result: null Connection Information: Database Type: clickhouse Host: clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local Port: 8123 Database: default Table: User: admin Org: Access Mode: mysql Test Type: connectionstress Connection Count: 4096 Duration: 60 seconds `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-fayllv --namespace ns-ylzrl ` pod/test-db-client-connectionstress-clkhouse-fayllv patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-connectionstress-clkhouse-fayllv" force deleted check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover connectionstress Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover networklossover check cluster status before cluster-failover-networklossover check cluster status done
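Note that the `jsonpath="{.data.username}"` lookups above return base64-encoded secret data, which must be decoded before it is usable as `DB_USERNAME`/`DB_PASSWORD`. A minimal sketch of that decode step, with an illustrative encoded value rather than the cluster's real secret:

```shell
# Secret data returned by `kubectl get secrets -o jsonpath="{.data.username}"`
# is base64-encoded; decode it before use.
encoded_user='YWRtaW4='   # illustrative value: base64 for "admin"
decoded_user=$(printf '%s' "$encoded_user" | base64 -d)
echo "$decoded_user"      # prints: admin
```

The same decoding presumably applies to the `.data.password` and `.data.port` fields before the harness prints the `DB_USERNAME:...;DB_PASSWORD:...` line.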
cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-clkhouse-fayllv" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networklossover-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: loss loss: loss: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networklossover-clkhouse-fayllv.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-clkhouse-fayllv created apply test-chaos-mesh-networklossover-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-networklossover-clkhouse-fayllv.yaml` networklossover chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT)
STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:14 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:23 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:22 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely. networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-clkhouse-fayllv patched check failover pod name networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-clkhouse-fayllv" force deleted failover pod name:clkhouse-fayllv-clickhouse-0 failover networklossover Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart clkhouse-fayllv --auto-approve --force=true --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-restart-w67kk created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-restart-w67kk -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-restart-w67kk ns-ylzrl Restart clkhouse-fayllv clickhouse,ch-keeper Sep 01,2025 12:27 UTC+0800 ops_status:clkhouse-fayllv-restart-w67kk ns-ylzrl Restart clkhouse-fayllv clickhouse,ch-keeper Creating -/- Sep 01,2025 12:27 UTC+0800 check cluster status
`kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Abnormal cluster_status:Abnormal cluster_status:Abnormal cluster_status:Abnormal cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 check pod status done `kubectl get secrets 
-l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-restart-w67kk ns-ylzrl Restart clkhouse-fayllv clickhouse,ch-keeper Succeed 4/4 Sep 01,2025 12:27 UTC+0800 check ops status done ops_status:clkhouse-fayllv-restart-w67kk ns-ylzrl Restart clkhouse-fayllv clickhouse,ch-keeper Succeed 4/4 Sep 01,2025 12:27 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-restart-w67kk --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-restart-w67kk patched `kbcli cluster delete-ops --name clkhouse-fayllv-restart-w67kk --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-restart-w67kk deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o
jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success test failover podfailure check cluster status before cluster-failover-podfailure check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-clkhouse-fayllv" not found Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-clkhouse-fayllv" not found apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podfailure-clkhouse-fayllv namespace: ns-ylzrl spec: selector: namespaces: - ns-ylzrl labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-fayllv-clickhouse-0 mode: all action: pod-failure duration: 2m `kubectl apply -f test-chaos-mesh-podfailure-clkhouse-fayllv.yaml` podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-clkhouse-fayllv created apply test-chaos-mesh-podfailure-clkhouse-fayllv.yaml Success `rm -rf test-chaos-mesh-podfailure-clkhouse-fayllv.yaml` podfailure chaos test waiting 120 seconds check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl
clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host
clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-clkhouse-fayllv --namespace ns-ylzrl ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-clkhouse-fayllv" force deleted Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-clkhouse-fayllv" not found check failover pod name failover pod name:clkhouse-fayllv-clickhouse-0 failover podfailure Success `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster hscale offline instances apiVersion: apps.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-fayllv-hscaleoffinstance- labels: app.kubernetes.io/instance: clkhouse-fayllv app.kubernetes.io/managed-by: kubeblocks namespace: ns-ylzrl spec: type: HorizontalScaling
clusterName: clkhouse-fayllv force: true horizontalScaling: - componentName: clickhouse scaleIn: onlineInstancesToOffline: - clkhouse-fayllv-clickhouse-0 check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_clkhouse-fayllv.yaml` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-hscaleoffinstance-n8x5r created create test_ops_cluster_clkhouse-fayllv.yaml Success `rm -rf test_ops_cluster_clkhouse-fayllv.yaml` check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-hscaleoffinstance-n8x5r ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Creating -/- Sep 01,2025 12:31 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi 
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-1 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-hscaleoffinstance-n8x5r ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 1/1 Sep 01,2025 12:31 UTC+0800 check ops status done ops_status:clkhouse-fayllv-hscaleoffinstance-n8x5r ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 1/1 Sep 01,2025 12:31 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-hscaleoffinstance-n8x5r --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-hscaleoffinstance-n8x5r patched `kbcli cluster delete-ops --name clkhouse-fayllv-hscaleoffinstance-n8x5r --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-hscaleoffinstance-n8x5r deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl
get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-1 --namespace ns-ylzrl -- bash` check db_client batch data Success cluster hscale online instances apiVersion: apps.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-fayllv-hscaleoninstance- labels: app.kubernetes.io/instance: clkhouse-fayllv app.kubernetes.io/managed-by: kubeblocks namespace: ns-ylzrl spec: type: HorizontalScaling clusterName: clkhouse-fayllv force: true horizontalScaling: - componentName: clickhouse scaleOut: offlineInstancesToOnline: - clkhouse-fayllv-clickhouse-0 check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_clkhouse-fayllv.yaml` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-hscaleoninstance-24g65 created create test_ops_cluster_clkhouse-fayllv.yaml Success `rm -rf test_ops_cluster_clkhouse-fayllv.yaml` check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Pending -/- Sep 01,2025 12:31 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Running Sep 01,2025 11:19 UTC+0800
app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Init:0/1 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:31 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:28 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:27 UTC+0800 pod_status:Init:0/1 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it
clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 12:31 UTC+0800 ops_status:clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 12:31 UTC+0800 ops_status:clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 12:31 UTC+0800 ops_status:clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 12:31 UTC+0800 ops_status:clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Running 0/1 Sep 01,2025 12:31 UTC+0800 check ops status done ops_status:clkhouse-fayllv-hscaleoninstance-24g65 ns-ylzrl HorizontalScaling clkhouse-fayllv clickhouse Succeed 1/1 Sep 01,2025 12:31 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests clkhouse-fayllv-hscaleoninstance-24g65 --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-hscaleoninstance-24g65 patched `kbcli cluster delete-ops --name clkhouse-fayllv-hscaleoninstance-24g65 --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-hscaleoninstance-24g65 deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="{.data.port}"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data retry times: 1 check db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop clkhouse-fayllv --auto-approve --force=true --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-stop-2kntk created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-stop-2kntk -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-stop-2kntk ns-ylzrl Stop clkhouse-fayllv Pending -/- Sep 01,2025 12:32 UTC+0800 ops_status:clkhouse-fayllv-stop-2kntk ns-ylzrl Stop clkhouse-fayllv Creating -/- Sep 01,2025 12:32 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Stopping Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE 
ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-stop-2kntk ns-ylzrl Stop clkhouse-fayllv ch-keeper,clickhouse Succeed 4/4 Sep 01,2025 12:32 UTC+0800 check ops status done ops_status:clkhouse-fayllv-stop-2kntk ns-ylzrl Stop clkhouse-fayllv ch-keeper,clickhouse Succeed 4/4 Sep 01,2025 12:32 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests clkhouse-fayllv-stop-2kntk --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-stop-2kntk patched `kbcli cluster delete-ops --name clkhouse-fayllv-stop-2kntk --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-stop-2kntk deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start clkhouse-fayllv --force=true --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-start-wgnhm created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-fayllv-start-wgnhm -n ns-ylzrl check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-fayllv-start-wgnhm ns-ylzrl Start clkhouse-fayllv Pending -/- Sep 01,2025 12:33 UTC+0800 check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse DoNotTerminate Updating Sep 01,2025 11:19 UTC+0800 app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Abnormal cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli 
cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:33 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.username***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.password***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.port***"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check cluster connect done check ops status `kbcli cluster list-ops clkhouse-fayllv --status all --namespace ns-ylzrl ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS 
CREATED-TIME clkhouse-fayllv-start-wgnhm ns-ylzrl Start clkhouse-fayllv ch-keeper,clickhouse Succeed 4/4 Sep 01,2025 12:33 UTC+0800 check ops status done ops_status:clkhouse-fayllv-start-wgnhm ns-ylzrl Start clkhouse-fayllv ch-keeper,clickhouse Succeed 4/4 Sep 01,2025 12:33 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests clkhouse-fayllv-start-wgnhm --namespace ns-ylzrl ` opsrequest.apps.kubeblocks.io/clkhouse-fayllv-start-wgnhm patched `kbcli cluster delete-ops --name clkhouse-fayllv-start-wgnhm --force --auto-approve --namespace ns-ylzrl ` OpsRequest clkhouse-fayllv-start-wgnhm deleted `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.username***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.password***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.port***"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check db_client batch data count `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse --port 9000 --user admin --password "px4pt48rFuTURP3y" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace ns-ylzrl -- bash` check db_client batch data retry times: 1 check db_client batch data Success cluster update terminationPolicy WipeOut `kbcli cluster update clkhouse-fayllv --termination-policy=WipeOut --namespace ns-ylzrl ` cluster.apps.kubeblocks.io/clkhouse-fayllv updated check cluster status `kbcli cluster list clkhouse-fayllv --show-labels --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-fayllv ns-ylzrl clickhouse WipeOut Running Sep 01,2025 11:19 UTC+0800 
app.kubernetes.io/instance=clkhouse-fayllv,clusterdefinition.kubeblocks.io/name=clickhouse,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances clkhouse-fayllv --namespace ns-ylzrl ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-fayllv-ch-keeper-0 ns-ylzrl clkhouse-fayllv ch-keeper Running leader 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-0 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-1 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 12:33 UTC+0800 clkhouse-fayllv-clickhouse-2 ns-ylzrl clkhouse-fayllv clickhouse Running 0 300m / 300m 1181116006400m / 1181116006400m data:21Gi aks-cicdamdpool-15164480-vmss000000/10.224.0.5 Sep 01,2025 12:33 UTC+0800 check pod status done `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-fayllv` set secret: clkhouse-fayllv-clickhouse-account-admin `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.username***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.password***"` `kubectl get secrets clkhouse-fayllv-clickhouse-account-admin -o jsonpath="***.data.port***"` DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123 DB_USERNAME:admin;DB_PASSWORD:px4pt48rFuTURP3y;DB_PORT:9000;DB_DATABASE:default check cluster connect `echo 'clickhouse-client --host clkhouse-fayllv-clickhouse.ns-ylzrl.svc.cluster.local --port 9000 --user admin --password "px4pt48rFuTURP3y"' | kubectl exec -it clkhouse-fayllv-clickhouse-0 --namespace 
ns-ylzrl -- bash` check cluster connect done cluster list-logs `kbcli cluster list-logs clkhouse-fayllv --namespace ns-ylzrl ` No log files found. You can enable the log feature with the kbcli command below. kbcli cluster update clkhouse-fayllv --enable-all-logs=true --namespace ns-ylzrl Error from server (NotFound): pods "clkhouse-fayllv-clickhouse-0" not found cluster logs `kbcli cluster logs clkhouse-fayllv --tail 30 --namespace ns-ylzrl ` Defaulted container "clickhouse" out of: clickhouse, lorry, copy-tools (init), init-lorry (init) 10. DB::BackgroundSchedulePool::attachToThreadGroup() @ 0x14df3bc8 in /opt/bitnami/clickhouse/bin/clickhouse 11. DB::BackgroundSchedulePool::threadFunction() @ 0x14df3d0e in /opt/bitnami/clickhouse/bin/clickhouse 12. ? @ 0x14df4d50 in /opt/bitnami/clickhouse/bin/clickhouse 13. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xb44f9f7 in /opt/bitnami/clickhouse/bin/clickhouse 14. ? @ 0xb45357d in /opt/bitnami/clickhouse/bin/clickhouse 15. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so 16. __clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.3.18.37 (official build)) 2025.09.01 04:34:12.610625 [ 45 ] *** CertificateReloader: Cannot obtain modification time for certificate file /etc/clickhouse-server/server.crt, skipping update. errno: 2, strerror: No such file or directory 2025.09.01 04:34:12.610678 [ 45 ] *** CertificateReloader: Cannot obtain modification time for key file /etc/clickhouse-server/server.key, skipping update. errno: 2, strerror: No such file or directory 2025.09.01 04:34:12.610924 [ 45 ] *** CertificateReloader: Poco::Exception. 
Code: 1000, e.code() = 0, SSL context exception: Error loading private key from file /etc/clickhouse-server/server.key: error:02000002:system library:OPENSSL_internal:No such file or directory (version 22.3.18.37 (official build)) 2025.09.01 04:34:12.612421 [ 45 ] *** DNSCacheUpdater: Update period 15 seconds 2025.09.01 04:34:12.612487 [ 45 ] *** Application: Available RAM: 62.79 GiB; physical cores: 8; logical cores: 16. 2025.09.01 04:34:12.613177 [ 45 ] *** Application: Listening for http://0.0.0.0:8123 2025.09.01 04:34:12.613238 [ 45 ] *** Application: Listening for native protocol (tcp): 0.0.0.0:9000 2025.09.01 04:34:12.613318 [ 45 ] *** Application: Listening for replica communication (interserver): http://0.0.0.0:9009 2025.09.01 04:34:12.613382 [ 45 ] *** Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004 2025.09.01 04:34:12.613461 [ 45 ] *** Application: Listening for PostgreSQL compatibility protocol: 0.0.0.0:9005 2025.09.01 04:34:12.613526 [ 45 ] *** Application: Listening for Prometheus: http://0.0.0.0:8001 2025.09.01 04:34:12.613544 [ 45 ] *** Application: Ready for connections. 
2025.09.01 04:34:16.529294 [ 47 ] *** KeeperTCPHandler: Requesting session ID for the new client 2025.09.01 04:34:16.535575 [ 47 ] *** KeeperTCPHandler: Received session ID 37 2025.09.01 04:34:27.625072 [ 48 ] *** KeeperTCPHandler: Requesting session ID for the new client 2025.09.01 04:34:27.630892 [ 48 ] *** KeeperTCPHandler: Received session ID 38 2025.09.01 04:34:28.568297 [ 49 ] *** KeeperTCPHandler: Requesting session ID for the new client 2025.09.01 04:34:28.573594 [ 49 ] *** KeeperTCPHandler: Received session ID 39 2025.09.01 04:34:42.125363 [ 75 ] *** KeeperDispatcher: Found dead session 36, will try to close it 2025.09.01 04:34:42.125430 [ 75 ] *** KeeperDispatcher: Dead session close request pushed 2025.09.01 04:34:42.125443 [ 75 ] *** KeeperDispatcher: Found dead session 30, will try to close it 2025.09.01 04:34:42.125450 [ 75 ] *** KeeperDispatcher: Dead session close request pushed delete cluster clkhouse-fayllv `kbcli cluster delete clkhouse-fayllv --auto-approve --namespace ns-ylzrl ` Cluster clkhouse-fayllv deleted pod_info:clkhouse-fayllv-ch-keeper-0 2/2 Running 0 92s clkhouse-fayllv-clickhouse-0 2/2 Running 0 92s clkhouse-fayllv-clickhouse-1 2/2 Running 2 (67s ago) 92s clkhouse-fayllv-clickhouse-2 2/2 Running 2 (68s ago) 92s No resources found in ns-ylzrl namespace. delete cluster pod done No resources found in ns-ylzrl namespace. check cluster resource non-exist OK: pvc No resources found in ns-ylzrl namespace. delete cluster done No resources found in ns-ylzrl namespace. No resources found in ns-ylzrl namespace. No resources found in ns-ylzrl namespace. Clickhouse Test Suite All Done! 
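The credential lookups repeated throughout this run follow one pattern: read a field from the cluster's account Secret with a `kubectl ... -o jsonpath` query, then base64-decode it before use. A minimal local sketch of the decoding step, assuming a stand-in encoded value (the kubectl command in the comment is the one the harness runs; it is not executed here):

```shell
# In the run above, the encoded value comes from:
#   kubectl get secrets clkhouse-fayllv-clickhouse-account-admin \
#     -o jsonpath='{.data.username}'
# Kubernetes Secret .data fields are base64-encoded, so decode before use.
ENCODED_USERNAME="YWRtaW4="   # stand-in for the jsonpath output; base64 of "admin"
DB_USERNAME=$(printf '%s' "$ENCODED_USERNAME" | base64 -d)
echo "DB_USERNAME:${DB_USERNAME}"
```

The harness assembles the `DB_USERNAME:...;DB_PASSWORD:...` lines seen in the log from values decoded this way.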
--------------------------------------Clickhouse (Topology = cluster Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[ClusterDefinition=clickhouse;]|[Description=Create a cluster with the specified cluster definition clickhouse]
[PASSED]|[Connect]|[ComponentName=clickhouse]|[Description=Connect to the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=clickhouse]|[Description=HorizontalScaling Out the cluster specify component clickhouse]
[PASSED]|[HorizontalScaling In]|[ComponentName=clickhouse]|[Description=HorizontalScaling In the cluster specify component clickhouse]
[PASSED]|[No-Failover]|[HA=OOM;Durations=2m;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience OOM either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Memory load.]
[PASSED]|[No-Failover]|[HA=Network Bandwidth;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network bandwidth fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to bandwidth network.]
[PASSED]|[No-Failover]|[HA=Time Offset;Durations=2m;ComponentName=clickhouse]|[Description=Simulates a time offset scenario thereby testing the application's resilience to potential slowness/unavailability of some replicas due to time offset.]
[PASSED]|[No-Failover]|[HA=Network Duplicate;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network duplicate fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to duplicate network.]
[SKIPPED]|[No-Failover]|[HA=Evicting Pod;ComponentName=clickhouse]|[Description=Simulates conditions where pods evicting either due to node drained thereby testing the application's resilience to unavailability of some replicas due to evicting.]
[PASSED]|[VolumeExpansion]|[ComponentName=ch-keeper]|[Description=VolumeExpansion the cluster specify component ch-keeper]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[PASSED]|[No-Failover]|[HA=Network Delay;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network delay fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to delay network.]
[PASSED]|[VerticalScaling]|[ComponentName=clickhouse]|[Description=VerticalScaling the cluster specify component clickhouse]
[PASSED]|[No-Failover]|[HA=DNS Random;Durations=2m;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience random IP addresses being returned by the DNS service for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to the DNS service returning random IP addresses.]
[PASSED]|[VolumeExpansion]|[ComponentName=clickhouse]|[Description=VolumeExpansion the cluster specify component clickhouse]
[PASSED]|[No-Failover]|[HA=Network Corrupt;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network corrupt fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to corrupt network.]
[PASSED]|[No-Failover]|[HA=Pod Kill;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience kill for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to kill.]
[PASSED]|[VerticalScaling]|[ComponentName=ch-keeper]|[Description=VerticalScaling the cluster specify component ch-keeper]
[PASSED]|[No-Failover]|[HA=Network Partition;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network partition fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to partition network.]
[PASSED]|[No-Failover]|[HA=Full CPU;Durations=2m;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience CPU full either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.]
[PASSED]|[No-Failover]|[HA=DNS Error;Durations=2m;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience DNS service errors for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to DNS service errors.]
[PASSED]|[Restart]|[ComponentName=clickhouse]|[Description=Restart the cluster specify component clickhouse]
[PASSED]|[No-Failover]|[HA=Kill 1;ComponentName=clickhouse]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.]
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[No-Failover]|[HA=Network Loss;Durations=2m;ComponentName=clickhouse]|[Description=Simulates network loss fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to loss network.]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[No-Failover]|[HA=Pod Failure;Durations=2m;ComponentName=clickhouse]|[Description=Simulates conditions where pods experience failure for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to failure.]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=clickhouse]|[Description=Hscale the cluster instances offline specify component clickhouse]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=clickhouse]|[Description=Hscale the cluster instances online specify component clickhouse]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
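A closing note on the cleanup step the run performs after every OpsRequest: `metadata.finalizers` is emptied with a JSON merge patch so the object can then be force-deleted. The sketch below replays that merge locally on a hypothetical stand-in object (python3 stands in for the API server's RFC 7386 merge; the kubectl/kbcli commands in the comments are the ones from the log and are not executed here):

```shell
# Patch body used throughout the run:
PATCH='{"metadata":{"finalizers":[]}}'
# Applied in the run as, per OpsRequest:
#   kubectl patch -p "$PATCH" --type=merge opsrequests <name> --namespace <ns>
#   kbcli cluster delete-ops --name <name> --force --auto-approve --namespace <ns>
# Replay the merge locally on a hypothetical stand-in object:
RESULT=$(python3 - "$PATCH" <<'PYEOF'
import json, sys
patch = json.loads(sys.argv[1])
# Stand-in OpsRequest metadata still holding a (hypothetical) finalizer:
obj = {"metadata": {"name": "demo-ops", "finalizers": ["example.kubeblocks.io/finalizer"]}}
# JSON merge patch semantics: keys present in the patch replace the originals,
# so finalizers becomes the empty list and deletion can proceed.
obj["metadata"].update(patch["metadata"])
print("finalizers:", obj["metadata"]["finalizers"])
PYEOF
)
echo "$RESULT"
```

Clearing finalizers this way is what lets the subsequent forced `delete-ops` in the log complete without waiting on the controller.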