https://github.com/apecloud/apecloud-cd/actions/runs/21930219260
previous_version: kubeblocks_version:1.0.2
bash test/kbcli/test_kbcli_1.0.sh --type 29 --version 1.0.2 --service-version 22 --generate-output true --aws-access-key-id *** --aws-secret-access-key *** --jihulab-token *** --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-nhoig`
`kubectl create namespace ns-nhoig`
namespace/ns-nhoig created
create namespace ns-nhoig done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.2`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Make sure your docker service is running and begin your journey with kbcli:
  kbcli playground init
For more information on how to get started, please visit: https://kubeblocks.io
download kbcli v1.0.2 done
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Kubernetes Env: v1.32.10
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is:1.0.2
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.2
check component definition
set component name:clickhouse
set component version
set component version:clickhouse
set service versions:25.9.7,25.4.4,24.8.3,22.8.21,22.3.20,22.3.18
set service versions sorted:22.3.18,22.3.20,22.8.21,24.8.3,25.4.4,25.9.7
set clickhouse component definition
set clickhouse component definition clickhouse-1.0.2
REPORT_COUNT 0:0
set replicas first:2,22.3.18|2,22.3.20|2,22.8.21|2,24.8.3|2,25.4.4|2,25.9.7
set replicas second max again:2,22.3.18
set replicas second max again:2,22.3.20
set replicas second max again:2,22.8.21
REPORT_COUNT 2:1
CLUSTER_TOPOLOGY:cluster
cluster definition topology: standalone
cluster topology cluster found in cluster definition clickhouse
set clickhouse component definition
set clickhouse component definition clickhouse-keeper-1.0.2
LIMIT_CPU:0.2
LIMIT_MEMORY:2
storage size: 20
CLUSTER_NAME:clkhouse-oxscub
pod_info:
termination_policy:WipeOut
create 2 replica WipeOut clickhouse cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: clickhouse-1.0.2 by component version:clickhouse

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: clkhouse-oxscub
  namespace: ns-nhoig
spec:
  clusterDef: clickhouse
  topology: cluster
  terminationPolicy: WipeOut
  componentSpecs:
    - name: ch-keeper
      serviceVersion: 22.8.21
      replicas: 3
      disableExporter: false
      services:
        - name: default
          serviceType: ClusterIP
      systemAccounts:
        - name: admin
          passwordConfig:
            length: 10
            numDigits: 5
            numSymbols: 0
            letterCase: MixedCases
            seed: clkhouse-oxscub
      resources:
        requests:
          cpu: 200m
          memory: 2Gi
        limits:
          cpu: 200m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
  shardings:
    - name: clickhouse
      shards: 2
      template:
        name: clickhouse
        serviceVersion: 22.8.21
        env:
          - name: "INIT_CLUSTER_NAME"
            value: "default"
        replicas: 2
        disableExporter: false
        services:
          - name: default
            serviceType: ClusterIP
        systemAccounts:
          - name: admin
            passwordConfig:
              length: 10
              numDigits: 5
              numSymbols: 0
              letterCase: MixedCases
              seed: clkhouse-oxscub
        resources:
          requests:
            cpu: 200m
            memory: 2Gi
          limits:
            cpu: 200m
            memory: 2Gi
        volumeClaimTemplates:
          - name: data
            spec:
              storageClassName:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi

`kubectl apply -f test_create_clkhouse-oxscub.yaml`
cluster.apps.kubeblocks.io/clkhouse-oxscub created
apply test_create_clkhouse-oxscub.yaml Success
`rm -rf test_create_clkhouse-oxscub.yaml`
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Creating Feb 12,2026 10:05 UTC+0800 clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Creating (repeated while polling)
cluster_status:Updating (repeated while polling)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
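Note that the `.data` fields read via jsonpath above come back base64-encoded; a minimal sketch for recovering the plaintext credentials by hand (secret name as set in this run):
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin --namespace ns-nhoig -o jsonpath="{.data.username}" | base64 -d`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin --namespace ns-nhoig -o jsonpath="{.data.password}" | base64 -d`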
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check pod clkhouse-oxscub-clickhouse-t2k-0 container_name clickhouse exist password 0H4Y3s2wR1
check pod clkhouse-oxscub-clickhouse-t2k-0 container_name kbagent exist password 0H4Y3s2wR1
No container logs contain secret password.
describe cluster
`kbcli cluster describe clkhouse-oxscub --namespace ns-nhoig`
Name: clkhouse-oxscub   Created Time: Feb 12,2026 10:05 UTC+0800
NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY
ns-nhoig clickhouse cluster Running WipeOut
Endpoints:
COMPONENT INTERNAL EXTERNAL
ch-keeper:
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:8123
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:8443
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9000
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9009
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9010
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:8001
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9181
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9234
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9281
  clkhouse-oxscub-ch-keeper.ns-nhoig.svc.cluster.local:9440
clickhouse(clickhouse-t2k):
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:8001
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:8123
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:8443
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9000
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9004
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9005
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9009
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9010
  clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local:9440
clickhouse(clickhouse-vtg):
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:8001
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:8123
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:8443
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9000
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9004
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9005
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9009
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9010
  clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local:9440
Topology:
COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME
ch-keeper 22.8.21 clkhouse-oxscub-ch-keeper-0 leader Running 0 aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:05 UTC+0800
ch-keeper 22.8.21 clkhouse-oxscub-ch-keeper-1 follower Running 0 aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:05 UTC+0800
ch-keeper 22.8.21 clkhouse-oxscub-ch-keeper-2 follower Running 0 aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:05 UTC+0800
clickhouse(clickhouse-t2k) 22.8.21 clkhouse-oxscub-clickhouse-t2k-0 Running 0 aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
clickhouse(clickhouse-t2k) 22.8.21 clkhouse-oxscub-clickhouse-t2k-1 Running 0 aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:08 UTC+0800
clickhouse(clickhouse-vtg) 22.8.21 clkhouse-oxscub-clickhouse-vtg-0 Running 0 aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:08 UTC+0800
clickhouse(clickhouse-vtg) 22.8.21 clkhouse-oxscub-clickhouse-vtg-1 Running 0 aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
Resources Allocation:
COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
ch-keeper 200m / 200m 2Gi / 2Gi data:20Gi default
clickhouse 200m / 200m 2Gi / 2Gi data:20Gi default
Images:
COMPONENT COMPONENT-DEFINITION IMAGE
ch-keeper clickhouse-keeper-1.0.2 docker.io/apecloud/clickhouse:22.8.21-debian-11-r33
clickhouse clickhouse-1.0.2 docker.io/apecloud/clickhouse:22.8.21-debian-11-r33
Data Protection:
BACKUP-REPO AUTO-BACKUP BACKUP-SCHEDULE BACKUP-METHOD BACKUP-RETENTION RECOVERABLE-TIME
Show cluster events: kbcli cluster list-events -n ns-nhoig clkhouse-oxscub
get cluster clkhouse-oxscub shard clickhouse component name
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig`
set shard component name:clickhouse-t2k
`kbcli cluster label clkhouse-oxscub app.kubernetes.io/instance- --namespace ns-nhoig`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label clkhouse-oxscub app.kubernetes.io/instance=clkhouse-oxscub --namespace ns-nhoig`
`kbcli cluster label clkhouse-oxscub --list --namespace ns-nhoig`
NAME NAMESPACE LABELS
clkhouse-oxscub ns-nhoig app.kubernetes.io/instance=clkhouse-oxscub clusterdefinition.kubeblocks.io/name=clickhouse
label cluster app.kubernetes.io/instance=clkhouse-oxscub Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=clkhouse-oxscub --namespace ns-nhoig`
`kbcli cluster label clkhouse-oxscub --list --namespace ns-nhoig`
NAME NAMESPACE LABELS
clkhouse-oxscub ns-nhoig app.kubernetes.io/instance=clkhouse-oxscub case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=clickhouse
label cluster case.name=kbcli.test1 Success
`kbcli cluster label clkhouse-oxscub case.name=kbcli.test2 --overwrite --namespace ns-nhoig`
`kbcli cluster label clkhouse-oxscub --list --namespace ns-nhoig`
NAME NAMESPACE LABELS
clkhouse-oxscub ns-nhoig app.kubernetes.io/instance=clkhouse-oxscub case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=clickhouse
label cluster case.name=kbcli.test2 Success
`kbcli cluster label clkhouse-oxscub case.name- --namespace ns-nhoig`
`kbcli cluster label clkhouse-oxscub --list --namespace ns-nhoig`
NAME NAMESPACE LABELS
clkhouse-oxscub ns-nhoig app.kubernetes.io/instance=clkhouse-oxscub clusterdefinition.kubeblocks.io/name=clickhouse
delete cluster label case.name Success
cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
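The connection checks in this run all go through `kubectl exec` into a server pod. For ad-hoc access from a workstation, port-forwarding the per-shard Service shown in the endpoints list would also work (a sketch, assuming a local clickhouse-client install):
`kubectl port-forward svc/clkhouse-oxscub-clickhouse-t2k 9000:9000 --namespace ns-nhoig &`
`clickhouse-client --host 127.0.0.1 --port 9000 --user admin --password "0H4Y3s2wR1"`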
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT * FROM system.clusters"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
(one row per replica; leading columns follow system.clusters: cluster, shard_num, shard_weight, replica_num, host_name, host_address, port, is_local, user, ...)
default 1 1 1 clkhouse-oxscub-clickhouse-t2k-0.clkhouse-oxscub-clickhouse-t2k-headless.ns-nhoig.svc.cluster.local 10.244.2.148 9000 0 admin 0 0 0
default 1 1 2 clkhouse-oxscub-clickhouse-t2k-1.clkhouse-oxscub-clickhouse-t2k-headless.ns-nhoig.svc.cluster.local 10.244.4.243 9000 1 admin 0 0 0
default 2 1 1 clkhouse-oxscub-clickhouse-vtg-0.clkhouse-oxscub-clickhouse-vtg-headless.ns-nhoig.svc.cluster.local 10.244.3.240 9000 0 admin 0 0 0
default 2 1 2 clkhouse-oxscub-clickhouse-vtg-1.clkhouse-oxscub-clickhouse-vtg-headless.ns-nhoig.svc.cluster.local 10.244.2.233 9000 0 admin 0 0 0
connect cluster Success
insert batch data by db client
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-oxscub --namespace ns-nhoig`
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default

apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-clkhouse-oxscub
  namespace: ns-nhoig
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local"
        - "--user"
        - "admin"
        - "--password"
        - "0H4Y3s2wR1"
        - "--port"
        - "8123"
        - "--dbtype"
        - "clickhouse"
        - "--test"
        - "executionloop"
        - "--duration"
        - "20"
        - "--interval"
        - "1"
        - "--cluster"
        - "default"
  restartPolicy: Never

`kubectl apply -f test-db-client-executionloop-clkhouse-oxscub.yaml`
pod/test-db-client-executionloop-clkhouse-oxscub created
apply test-db-client-executionloop-clkhouse-oxscub.yaml Success
`rm -rf test-db-client-executionloop-clkhouse-oxscub.yaml`
check pod status
pod_status: test-db-client-executionloop-clkhouse-oxscub 0/1 ContainerCreating 0 5s
pod_status: test-db-client-executionloop-clkhouse-oxscub 0/1 ContainerCreating 0 9s
pod_status: test-db-client-executionloop-clkhouse-oxscub 0/1 ContainerCreating 0 14s
pod_status: test-db-client-executionloop-clkhouse-oxscub 0/1 ContainerCreating 0 20s
pod_status: test-db-client-executionloop-clkhouse-oxscub 1/1 Running 0 25s
pod_status: test-db-client-executionloop-clkhouse-oxscub 1/1 Running 0 30s
pod_status: test-db-client-executionloop-clkhouse-oxscub 1/1 Running 0 35s
pod_status: test-db-client-executionloop-clkhouse-oxscub 1/1 Running 0 40s
check pod test-db-client-executionloop-clkhouse-oxscub status done
pod_status: test-db-client-executionloop-clkhouse-oxscub 0/1 Completed 0 45s
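Since the client pod runs to completion (restartPolicy: Never), a blocking wait plus a log fetch is a handy alternative to the polling loop above (a sketch; jsonpath waits need kubectl >= 1.23, well below the v1.32.10 used here):
`kubectl wait pod/test-db-client-executionloop-clkhouse-oxscub --namespace ns-nhoig --for=jsonpath='{.status.phase}'=Succeeded --timeout=180s`
`kubectl logs test-db-client-executionloop-clkhouse-oxscub --namespace ns-nhoig`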
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Running Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:05 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:08 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:08 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
--host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --user admin --password 0H4Y3s2wR1 --port 8123 --dbtype clickhouse --test executionloop --duration 20 --interval 1 --cluster default
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
02:10:17.374 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
02:10:17.376 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. *
02:10:17.376 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0. *
02:10:17.377 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
Execution loop start:
create database executions_loop
  CREATE DATABASE IF NOT EXISTS executions_loop ON CLUSTER default;
drop distributed table executions_loop_table_distributed
  DROP TABLE IF EXISTS executions_loop.executions_loop_table_distributed ON CLUSTER default SYNC;
drop table executions_loop_table
  DROP TABLE IF EXISTS executions_loop.executions_loop_table ON CLUSTER default SYNC;
create table executions_loop_table
  CREATE TABLE IF NOT EXISTS executions_loop.executions_loop_table ON CLUSTER default (id UInt32, value String) ENGINE = ReplicatedMergeTree() ORDER BY id;
create distributed table executions_loop_table_distributed
  CREATE TABLE IF NOT EXISTS executions_loop.executions_loop_table_distributed ON CLUSTER default AS executions_loop.executions_loop_table ENGINE = Distributed('default', 'executions_loop', 'executions_loop_table', rand());
Execution loop start: INSERT INTO executions_loop.executions_loop_table_distributed (id, value) VALUES (1, 'executions_loop_test_1');
[ 1s ] executions total: 1 successful: 1 failed: 0 disconnect: 0
[ 2s ] executions total: 20 successful: 20 failed: 0 disconnect: 0
[ 3s ] executions total: 36 successful: 36 failed: 0 disconnect: 0
[ 4s ] executions total: 48 successful: 48 failed: 0 disconnect: 0
[ 5s ] executions total: 64 successful: 64 failed: 0 disconnect: 0
[ 6s ] executions total: 81 successful: 81 failed: 0 disconnect: 0
[ 7s ] executions total: 100 successful: 100 failed: 0 disconnect: 0
[ 8s ] executions total: 113 successful: 113 failed: 0 disconnect: 0
[ 9s ] executions total: 128 successful: 128 failed: 0 disconnect: 0
[ 10s ] executions total: 144 successful: 144 failed: 0 disconnect: 0
[ 11s ] executions total: 158 successful: 158 failed: 0 disconnect: 0
[ 12s ] executions total: 174 successful: 174 failed: 0 disconnect: 0
[ 13s ] executions total: 186 successful: 186 failed: 0 disconnect: 0
[ 14s ] executions total: 205 successful: 205 failed: 0 disconnect: 0
[ 15s ] executions total: 215 successful: 215 failed: 0 disconnect: 0
[ 16s ] executions total: 228 successful: 228 failed: 0 disconnect: 0
[ 17s ] executions total: 246 successful: 246 failed: 0 disconnect: 0
[ 18s ] executions total: 263 successful: 263 failed: 0 disconnect: 0
[ 20s ] executions total: 269 successful: 269 failed: 0 disconnect: 0
Test Result:
Total Executions: 269
Successful Executions: 269
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: clickhouse
Host: clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local
Port: 8123
Database:
Table:
User: admin
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 20 seconds
Interval: 1 seconds
Cluster: default
DB_CLIENT_BATCH_DATA_COUNT: 269
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-oxscub --namespace ns-nhoig`
pod/test-db-client-executionloop-clkhouse-oxscub patched (no change)
pod "test-db-client-executionloop-clkhouse-oxscub" force deleted
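The loop's DDL creates a ReplicatedMergeTree table per shard plus a Distributed table over it, so the 269 rows are spread across both shards. A quick manual check of how rows landed per host (a sketch reusing this run's credentials; the clusterAllReplicas form assumes the db.table signature documented for ClickHouse):
`kubectl exec clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- clickhouse-client --user admin --password "0H4Y3s2wR1" --query "SELECT count() FROM executions_loop.executions_loop_table_distributed"`
`kubectl exec clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- clickhouse-client --user admin --password "0H4Y3s2wR1" --query "SELECT hostName(), count() FROM clusterAllReplicas('default', executions_loop.executions_loop_table) GROUP BY hostName()"`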
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
set db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
set db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart clkhouse-oxscub --auto-approve --force=true --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-mdj4b created successfully, you can view the progress:
  kbcli cluster describe-ops clkhouse-oxscub-restart-mdj4b -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-restart-mdj4b ns-nhoig Restart clkhouse-oxscub ch-keeper,clickhouse Running 0/7 Feb 12,2026 10:10 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Updating Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated while polling)
`kubectl get pods -l app.kubernetes.io/instance=clkhouse-oxscub -n ns-nhoig | (grep 'clkhouse-oxscub-ch-keeper' || true)`
pod "clkhouse-oxscub-ch-keeper-0" force deleted
pod "clkhouse-oxscub-ch-keeper-1" force deleted
pod "clkhouse-oxscub-ch-keeper-2" force deleted
cluster_status:Updating (repeated while polling)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:11 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-restart-mdj4b ns-nhoig Restart clkhouse-oxscub ch-keeper,clickhouse Succeed 7/7 Feb 12,2026 10:10 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-restart-mdj4b ns-nhoig Restart clkhouse-oxscub ch-keeper,clickhouse Succeed 7/7 Feb 12,2026 10:10 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-restart-mdj4b --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-restart-mdj4b patched
`kbcli cluster delete-ops --name clkhouse-oxscub-restart-mdj4b --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-mdj4b deleted
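For reference, `kbcli cluster restart` is sugar for creating a Restart OpsRequest, analogous to the Custom ops manifest shown later in this log. A hand-written equivalent would look roughly like this (a sketch; the layout of the restart field is an assumption, not verified against the 1.0.2 CRD):

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: clkhouse-oxscub-restart-
  namespace: ns-nhoig
spec:
  type: Restart
  clusterName: clkhouse-oxscub
  force: true
  restart:                      # assumption: list of components to restart
    - componentName: ch-keeper
    - componentName: clickhouse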
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check db_client batch [269] equal [269] data Success
test failover
check cluster status before cluster-failover-
check cluster status done
cluster_status:Running
delete pod:clkhouse-oxscub-clickhouse-t2k-0
`kubectl delete pod clkhouse-oxscub-clickhouse-t2k-0 --force --namespace ns-nhoig`
pod "clkhouse-oxscub-clickhouse-t2k-0" force deleted
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Updating Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated while polling)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
check failover pod name
failover pod name:clkhouse-oxscub-clickhouse-t2k-0
failover Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check db_client batch [269] equal [269] data Success
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale clkhouse-oxscub --auto-approve --force=true --components ch-keeper --cpu 300m --memory 2.1Gi --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-verticalscaling-pvb5w created successfully, you can view the progress:
  kbcli cluster describe-ops clkhouse-oxscub-verticalscaling-pvb5w -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-verticalscaling-pvb5w ns-nhoig VerticalScaling clkhouse-oxscub ch-keeper Running -/- Feb 12,2026 10:17 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Updating Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated while polling)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-verticalscaling-pvb5w ns-nhoig VerticalScaling clkhouse-oxscub ch-keeper Succeed 3/3 Feb 12,2026 10:17 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-verticalscaling-pvb5w ns-nhoig VerticalScaling clkhouse-oxscub ch-keeper Succeed 3/3 Feb 12,2026 10:17 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-verticalscaling-pvb5w --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-verticalscaling-pvb5w patched
`kbcli cluster delete-ops --name clkhouse-oxscub-verticalscaling-pvb5w --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-verticalscaling-pvb5w deleted
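A note on the odd memory figure in the listing above: --memory 2.1Gi is not a whole number of bytes (2.1 x 2^30 = 2254857830.4 bytes), and Kubernetes resource quantities fall back to milli-unit notation for fractional values, so 2254857830.4 bytes x 1000 is printed as 2254857830400m, which is exactly 2.1Gi.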
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check db_client batch [269] equal [269] data Success
patch clkhouse-oxscub shards 3
`kubectl patch cluster clkhouse-oxscub --namespace ns-nhoig --type json -p '[{"op": "replace", "path": "/spec/shardings/0/shards", "value": '3'}]'`
cluster.apps.kubeblocks.io/clkhouse-oxscub patched
get cluster clkhouse-oxscub shard clickhouse component name
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig`
set shard component name:clickhouse-vtg
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Updating Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated while polling)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
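Each shard materializes as its own Component object with a generated suffix (t2k, vtg, and now ql2). To watch the new shard's Component come up after the patch, the same selector already used above works (sketch):
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig -w`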
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-vtg-0 --namespace ns-nhoig -- bash`
check cluster connect done
job pod status:
job pod status:
job pod status:
check clkhouse-oxscub post-provision skip
cluster custom-ops post-scale-out-shard-for-clickhouse

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: clkhouse-oxscub-custom-
  namespace: ns-nhoig
spec:
  type: Custom
  clusterName: clkhouse-oxscub
  force: true
  custom:
    components:
      - componentName: clickhouse
    maxConcurrentComponents: 0
    opsDefinitionName: post-scale-out-shard-for-clickhouse

check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_clkhouse-oxscub.yaml`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-custom-58cx5 created
create test_ops_cluster_clkhouse-oxscub.yaml Success
`rm -rf test_ops_cluster_clkhouse-oxscub.yaml`
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-custom-58cx5 ns-nhoig Custom clkhouse-oxscub clickhouse Running 0/1 Feb 12,2026 10:20 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Running Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
clkhouse-oxscub-clickhouse-vtg-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-vtg-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-vtg) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-vtg-0 --namespace ns-nhoig -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-custom-58cx5 ns-nhoig Custom clkhouse-oxscub clickhouse Running 0/1 Feb 12,2026 10:20 UTC+0800
ops_status:clkhouse-oxscub-custom-58cx5 ns-nhoig Custom clkhouse-oxscub clickhouse Running 0/1 Feb 12,2026 10:20 UTC+0800 (repeated while polling)
check ops status done
ops_status:clkhouse-oxscub-custom-58cx5 ns-nhoig Custom clkhouse-oxscub clickhouse Succeed 1/1 Feb 12,2026 10:20 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-custom-58cx5 --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-custom-58cx5 patched
`kbcli cluster delete-ops --name clkhouse-oxscub-custom-58cx5 --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-custom-58cx5 deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-vtg-0 --namespace ns-nhoig -- bash`
check db_client batch [269] equal [269] data Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-vtg-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-vtg-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-vtg.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-vtg-0 --namespace ns-nhoig -- bash`
check db_client batch [269] equal [269] data Success
patch clkhouse-oxscub shards 2
`kubectl patch cluster clkhouse-oxscub --namespace ns-nhoig --type json -p '[{"op": "replace", "path": "/spec/shardings/0/shards", "value": '2'}]'`
cluster.apps.kubeblocks.io/clkhouse-oxscub patched
get cluster clkhouse-oxscub shard clickhouse component name
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig`
set shard component name:clickhouse-t2k
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Running Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:10 UTC+0800
check pod status done
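After the scale-in, the removed shard (clickhouse-vtg) should also disappear from the server-side topology. One way to confirm, running clickhouse-client directly in a pod rather than via the echo-pipe used above (sketch):
`kubectl exec clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- clickhouse-client --user admin --password "0H4Y3s2wR1" --query "SELECT shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'default'"`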
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
job pod status:
job pod status:
job pod status:
check clkhouse-oxscub pre-terminate skip
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
set db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
set db_client batch data retry times: 1
set db_client batch data retry times: 2
set db_client batch data retry times: 3
set db_client batch data retry times: 4
set db_client batch data retry times: 5
set db_client batch data retry times: 6
set db_client batch data retry times: 7
set db_client batch data retry times: 8
set db_client batch data retry times: 9
set db_client batch data retry times: 10
set db_client batch data Failure
141 set DB_CLIENT_BATCH_DATA_COUNT: 163
(The expected row count no longer matches after the shard scale-in, presumably because rows that lived only on the removed shard are gone; the baseline is reset to the observed 163.)
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check db_client batch [163] equal [163] data Success
`kubectl get pvc -l app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/component-name=clickhouse,apps.kubeblocks.io/vct-name=data --namespace ns-nhoig`
clkhouse-oxscub clickhouse data pvc is empty
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster volume-expand clkhouse-oxscub --auto-approve --force=true --components clickhouse --volume-claim-templates data --storage 23Gi --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-volumeexpansion-nbqf6 created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-volumeexpansion-nbqf6 -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
clkhouse-oxscub-volumeexpansion-nbqf6  ns-nhoig  VolumeExpansion  clkhouse-oxscub  clickhouse  Running  0/4  Feb 12,2026 10:22 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated until the expansion completes)
check cluster status done
cluster_status:Running
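The expansion itself is a one-liner; PVCs can only grow, so 23Gi must be larger than the current 20Gi. These are the same commands echoed in the log, collected for reference:

```bash
# Expand the data volume of every clickhouse replica from 20Gi to 23Gi.
kbcli cluster volume-expand clkhouse-oxscub --auto-approve --force=true \
  --components clickhouse --volume-claim-templates data --storage 23Gi \
  --namespace ns-nhoig
# Watch the resulting OpsRequest until PROGRESS reaches 4/4:
kbcli cluster describe-ops clkhouse-oxscub-volumeexpansion-nbqf6 -n ns-nhoig
```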
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:19 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:16 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:10 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-volumeexpansion-nbqf6  ns-nhoig  VolumeExpansion  clkhouse-oxscub  clickhouse  Succeed  4/4  Feb 12,2026 10:22 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-volumeexpansion-nbqf6 ns-nhoig VolumeExpansion clkhouse-oxscub clickhouse Succeed 4/4 Feb 12,2026 10:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-volumeexpansion-nbqf6 --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-volumeexpansion-nbqf6 patched
`kbcli cluster delete-ops --name clkhouse-oxscub-volumeexpansion-nbqf6 --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-volumeexpansion-nbqf6 deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
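Every completed OpsRequest in this run is cleaned up with the same two-step pattern seen here: clear the finalizers so deletion is not blocked by the controller, then force-delete the object:

```bash
# Generic OpsRequest cleanup used throughout this run.
OPS=clkhouse-oxscub-volumeexpansion-nbqf6
kubectl patch opsrequests.operations "$OPS" --namespace ns-nhoig \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
kbcli cluster delete-ops --name "$OPS" --force --auto-approve --namespace ns-nhoig
```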
test failover kill1
check cluster status before cluster-failover-kill1
check cluster status done
cluster_status:Running
`kill 1`
exec return message:
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
(instance listing unchanged from the previous check: all seven pods Running)
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check failover pod name
failover pod name:clkhouse-oxscub-clickhouse-t2k-0
failover kill1 Success
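The `kill 1` above runs inside the target pod: killing PID 1 forces the container to exit so the controller has to bring it back. A plausible standalone form of that step; the log does not show the exec wrapper, so treat this as an assumption:

```bash
# Kill PID 1 in the shard pod and let Kubernetes/KubeBlocks restart it.
kubectl exec clkhouse-oxscub-clickhouse-t2k-0 -n ns-nhoig -- bash -c 'kill 1'
# The harness then re-checks cluster status, pod status, connectivity, and the row count.
```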
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart clkhouse-oxscub --auto-approve --force=true --components clickhouse --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-vdc9r created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-restart-vdc9r -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-restart-vdc9r  ns-nhoig  Restart  clkhouse-oxscub  clickhouse  Running  -/-  Feb 12,2026 10:37 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
cluster_status:Updating (repeated until the restart completes)
check cluster status done
cluster_status:Running
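All of the cluster_status polling in this log can be reproduced by reading the Cluster CR's phase directly; a sketch, assuming the KubeBlocks Cluster resource exposes `.status.phase` with the values seen here (Creating, Updating, Stopping, Stopped, Running):

```bash
# Wait until the Cluster settles back to Running after an operation.
while :; do
  phase=$(kubectl get cluster clkhouse-oxscub -n ns-nhoig -o jsonpath='{.status.phase}')
  echo "cluster_status:${phase}"
  [ "$phase" = "Running" ] && break
  sleep 5   # assumed poll interval
done
```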
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:37 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:37 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:37 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  200m / 200m  2Gi / 2Gi  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:37 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-restart-vdc9r  ns-nhoig  Restart  clkhouse-oxscub  clickhouse  Succeed  4/4  Feb 12,2026 10:37 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-restart-vdc9r ns-nhoig Restart clkhouse-oxscub clickhouse Succeed 4/4 Feb 12,2026 10:37 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-restart-vdc9r --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-restart-vdc9r patched
`kbcli cluster delete-ops --name clkhouse-oxscub-restart-vdc9r --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-vdc9r deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale clkhouse-oxscub --auto-approve --force=true --components clickhouse --cpu 300m --memory 2.1Gi --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-verticalscaling-2kfsz created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-verticalscaling-2kfsz -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-verticalscaling-2kfsz  ns-nhoig  VerticalScaling  clkhouse-oxscub  clickhouse  Running  -/-  Feb 12,2026 10:38 UTC+0800
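When this VerticalScaling op completes, the new requests/limits can be read back straight from a pod spec; a sketch, assuming container index 0 is the clickhouse container:

```bash
# Verify the vscale result: expect cpu 300m and memory 2254857830400m (i.e. 2.1Gi).
kubectl get pod clkhouse-oxscub-clickhouse-t2k-0 -n ns-nhoig \
  -o jsonpath='{.spec.containers[0].resources}'
```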
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated until the scaling completes)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:17 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:18 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:38 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-verticalscaling-2kfsz  ns-nhoig  VerticalScaling  clkhouse-oxscub  clickhouse  Succeed  4/4  Feb 12,2026 10:38 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-verticalscaling-2kfsz ns-nhoig VerticalScaling clkhouse-oxscub clickhouse Succeed 4/4 Feb 12,2026 10:38 UTC+0800
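The odd-looking 2254857830400m memory figures in these listings are not corruption: 2.1Gi is not a whole number of bytes, so Kubernetes normalizes it to milli-byte units (bytes × 1000):

```bash
# 2.1Gi = 2.1 * 1024^3 bytes = 2254857830.4 bytes -> 2254857830400 milli-bytes.
awk 'BEGIN { printf "%.0f\n", 2.1 * 1024^3 * 1000 }'   # prints 2254857830400
```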
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-verticalscaling-2kfsz --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-verticalscaling-2kfsz patched
`kbcli cluster delete-ops --name clkhouse-oxscub-verticalscaling-2kfsz --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-verticalscaling-2kfsz deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart clkhouse-oxscub --auto-approve --force=true --components ch-keeper --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-kl9jd created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-restart-kl9jd -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-restart-kl9jd  ns-nhoig  Restart  clkhouse-oxscub  ch-keeper  Running  0/3  Feb 12,2026 10:39 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
cluster_status:Updating (repeated until the restart completes)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:40 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:40 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:39 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:38 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:38 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-restart-kl9jd  ns-nhoig  Restart  clkhouse-oxscub  ch-keeper  Succeed  3/3  Feb 12,2026 10:39 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-restart-kl9jd ns-nhoig Restart clkhouse-oxscub ch-keeper Succeed 3/3 Feb 12,2026 10:39 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-restart-kl9jd --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-restart-kl9jd patched
`kbcli cluster delete-ops --name clkhouse-oxscub-restart-kl9jd --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-restart-kl9jd deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster stop clkhouse-oxscub --auto-approve --force=true --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-stop-dpqsm created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-stop-dpqsm -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-stop-dpqsm  ns-nhoig  Stop  clkhouse-oxscub  ch-keeper,clickhouse  Running  0/7  Feb 12,2026 10:41 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Stopping  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Stopping
cluster_status:Stopping
check cluster status done
cluster_status:Stopped
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
(no instances listed: all pods have been released by the stop)
check pod status done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-stop-dpqsm  ns-nhoig  Stop  clkhouse-oxscub  ch-keeper,clickhouse  Succeed  7/7  Feb 12,2026 10:41 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-stop-dpqsm ns-nhoig Stop clkhouse-oxscub ch-keeper,clickhouse Succeed 7/7 Feb 12,2026 10:41 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-stop-dpqsm --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-stop-dpqsm patched
`kbcli cluster delete-ops --name clkhouse-oxscub-stop-dpqsm --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-stop-dpqsm deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start clkhouse-oxscub --force=true --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-start-dmhw2 created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-start-dmhw2 -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-start-dmhw2  ns-nhoig  Start  clkhouse-oxscub  ch-keeper,clickhouse  Running  0/7  Feb 12,2026 10:41 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated until the start completes)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-start-dmhw2  ns-nhoig  Start  clkhouse-oxscub  ch-keeper,clickhouse  Succeed  7/7  Feb 12,2026 10:41 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-start-dmhw2 ns-nhoig Start clkhouse-oxscub ch-keeper,clickhouse Succeed 7/7 Feb 12,2026 10:41 UTC+0800
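Stop and start are symmetric ops: stop deletes all workload pods (hence the empty instance listing above) while the data volumes stay behind, and start re-creates the pods; the unchanged 163-row count afterwards confirms the data survived. The pair of commands, as issued in this run:

```bash
# Stop releases compute but keeps storage; start re-creates the pods from the same volumes.
kbcli cluster stop  clkhouse-oxscub --auto-approve --force=true --namespace ns-nhoig
kbcli cluster start clkhouse-oxscub --force=true --namespace ns-nhoig
```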
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-start-dmhw2 --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-start-dmhw2 patched
`kbcli cluster delete-ops --name clkhouse-oxscub-start-dmhw2 --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-start-dmhw2 deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
test failover connectionstress
check cluster status before cluster-failover-connectionstress
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-oxscub --namespace ns-nhoig`
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-clkhouse-oxscub
  namespace: ns-nhoig
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args: ["--host", "clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local",
             "--user", "admin", "--password", "0H4Y3s2wR1", "--port", "8123",
             "--database", "default", "--dbtype", "clickhouse",
             "--test", "connectionstress", "--connections", "4096",
             "--duration", "20", "--cluster", "default"]
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-clkhouse-oxscub.yaml`
pod/test-db-client-connectionstress-clkhouse-oxscub created
apply test-db-client-connectionstress-clkhouse-oxscub.yaml Success
`rm -rf test-db-client-connectionstress-clkhouse-oxscub.yaml`
check pod status
pod_status: test-db-client-connectionstress-clkhouse-oxscub  1/1  Running  0  5s
pod_status: test-db-client-connectionstress-clkhouse-oxscub  1/1  Running  0  9s
pod_status: test-db-client-connectionstress-clkhouse-oxscub  1/1  Running  0  14s
pod_status: test-db-client-connectionstress-clkhouse-oxscub  1/1  Running  0  19s
check pod test-db-client-connectionstress-clkhouse-oxscub status done
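Instead of polling pod_status in a loop, a jsonpath wait achieves the same thing in one call; a sketch, assuming a reasonably recent kubectl (jsonpath waits need 1.23+; the timeout value is an assumption):

```bash
# Block until the stress-test pod finishes.
kubectl wait pod/test-db-client-connectionstress-clkhouse-oxscub -n ns-nhoig \
  --for=jsonpath='{.status.phase}'=Succeeded --timeout=120s
```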
pod_status: test-db-client-connectionstress-clkhouse-oxscub  0/1  Completed  0  24s
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Running  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
(instance listing unchanged from the post-start check: all seven pods Running)
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
--host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --user admin --password 0H4Y3s2wR1 --port 8123 --database default --dbtype clickhouse --test connectionstress --connections 4096 --duration 20 --cluster default
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
02:52:58.625 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
02:52:58.627 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. *
02:52:58.627 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0. *
02:52:58.627 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ******************************************************************************************
Test Result: null
Connection Information:
  Database Type: clickhouse
  Host: clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local
  Port: 8123
  Database: default
  Table:
  User: admin
  Org:
  Access Mode: mysql
  Test Type: connectionstress
  Connection Count: 4096
  Duration: 20 seconds
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-oxscub --namespace ns-nhoig`
pod/test-db-client-connectionstress-clkhouse-oxscub patched (no change)
pod "test-db-client-connectionstress-clkhouse-oxscub" force deleted
check failover pod name
failover pod name:clkhouse-oxscub-clickhouse-t2k-0
failover connectionstress Success
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
cluster clickhouse scale-out
cluster clickhouse scale-out replicas: 3
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-out clkhouse-oxscub --auto-approve --force=true --components clickhouse --replicas 1 --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-horizontalscaling-l42dl created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-horizontalscaling-l42dl -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-horizontalscaling-l42dl  ns-nhoig  HorizontalScaling  clkhouse-oxscub  clickhouse  Running  0/2  Feb 12,2026 10:53 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
cluster_status:Updating (repeated until the scale-out completes)
check cluster status done
cluster_status:Running
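Note that --replicas here is a delta, not a target: scale-out 1 takes each clickhouse shard from 2 to 3 replicas (hence "scale-out replicas: 3" above, and the new -2 pods in the listing below), and the matching scale-in later returns each shard to 2:

```bash
# Horizontal scaling per shard: +1 replica, then -1 replica.
kbcli cluster scale-out clkhouse-oxscub --auto-approve --force=true \
  --components clickhouse --replicas 1 --namespace ns-nhoig
kbcli cluster scale-in clkhouse-oxscub --auto-approve --force=true \
  --components clickhouse --replicas 1 --namespace ns-nhoig
```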
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-2  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:53 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-2  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:53 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-horizontalscaling-l42dl  ns-nhoig  HorizontalScaling  clkhouse-oxscub  clickhouse  Succeed  2/2  Feb 12,2026 10:53 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-horizontalscaling-l42dl ns-nhoig HorizontalScaling clkhouse-oxscub clickhouse Succeed 2/2 Feb 12,2026 10:53 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-horizontalscaling-l42dl --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-horizontalscaling-l42dl patched
`kbcli cluster delete-ops --name clkhouse-oxscub-horizontalscaling-l42dl --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-horizontalscaling-l42dl deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
cluster clickhouse scale-in
cluster clickhouse scale-in replicas: 2
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-in clkhouse-oxscub --auto-approve --force=true --components clickhouse --replicas 1 --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-horizontalscaling-wgfcp created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-horizontalscaling-wgfcp -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-horizontalscaling-wgfcp  ns-nhoig  HorizontalScaling  clkhouse-oxscub  clickhouse  Running  0/2  Feb 12,2026 10:54 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Running  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
clkhouse-oxscub-ch-keeper-0  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-1  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  leader  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-2  ns-nhoig  clkhouse-oxscub  ch-keeper  Running  follower  0  300m / 300m  2254857830400m / 2254857830400m  data:20Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-ql2)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1  ns-nhoig  clkhouse-oxscub  clickhouse(clickhouse-t2k)  Running  0  300m / 300m  2254857830400m / 2254857830400m  data:23Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:41 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-horizontalscaling-wgfcp  ns-nhoig  HorizontalScaling  clkhouse-oxscub  clickhouse  Succeed  2/2  Feb 12,2026 10:54 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-horizontalscaling-wgfcp ns-nhoig HorizontalScaling clkhouse-oxscub clickhouse Succeed 2/2 Feb 12,2026 10:54 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-horizontalscaling-wgfcp --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-horizontalscaling-wgfcp patched
`kbcli cluster delete-ops --name clkhouse-oxscub-horizontalscaling-wgfcp --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-horizontalscaling-wgfcp deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
check db_client batch [163] equal [163] data Success
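DB_PORT_PROXY:8123 in the credential dumps is ClickHouse's HTTP interface. The same count check can in principle be run over HTTP rather than the native 9000 port; a hedged sketch, assuming curl is present in the clickhouse image and using ClickHouse's standard HTTP auth headers:

```bash
# Row-count check via the HTTP interface on 8123 instead of the native protocol on 9000.
kubectl exec clkhouse-oxscub-clickhouse-t2k-0 -n ns-nhoig -- \
  curl -s 'http://localhost:8123/' \
    -H 'X-ClickHouse-User: admin' -H "X-ClickHouse-Key: ${DB_PASSWORD}" \
    --data-binary 'SELECT count(*) FROM executions_loop.executions_loop_table'
```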
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "CREATE TABLE test_kbcli (id Int32,name String) ENGINE = MergeTree() ORDER BY id;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
`echo "clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password \"0H4Y3s2wR1\" --query \"INSERT INTO test_kbcli VALUES (1,'rzfxc');\" " | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
(the same INSERT is issued a second time)
`clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT * FROM test_kbcli;"`
exec return msg:1 rzfxc
check msg:[rzfxc] equal msg:[1 rzfxc]
`kubectl get pvc -l app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/component-name=ch-keeper,apps.kubeblocks.io/vct-name=data --namespace ns-nhoig`
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster volume-expand clkhouse-oxscub --auto-approve --force=true --components ch-keeper --volume-claim-templates data --storage 21Gi --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-volumeexpansion-pthdt created successfully, you can view the progress:
        kbcli cluster describe-ops clkhouse-oxscub-volumeexpansion-pthdt -n ns-nhoig
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
clkhouse-oxscub-volumeexpansion-pthdt  ns-nhoig  VolumeExpansion  clkhouse-oxscub  ch-keeper  Running  0/3  Feb 12,2026 10:56 UTC+0800
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
clkhouse-oxscub  ns-nhoig  clickhouse  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating (repeated until the expansion completes)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:41 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops clkhouse-oxscub --status all --namespace ns-nhoig`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-oxscub-volumeexpansion-pthdt ns-nhoig VolumeExpansion clkhouse-oxscub ch-keeper Succeed 3/3 Feb 12,2026 10:56 UTC+0800
check ops status done
ops_status:clkhouse-oxscub-volumeexpansion-pthdt ns-nhoig VolumeExpansion clkhouse-oxscub ch-keeper Succeed 3/3 Feb 12,2026 10:56 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-oxscub-volumeexpansion-pthdt --namespace ns-nhoig`
opsrequest.operations.kubeblocks.io/clkhouse-oxscub-volumeexpansion-pthdt patched
`kbcli cluster delete-ops --name clkhouse-oxscub-volumeexpansion-pthdt --force --auto-approve --namespace ns-nhoig`
OpsRequest clkhouse-oxscub-volumeexpansion-pthdt deleted
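Clearing the finalizer list is what lets the forced delete return immediately instead of waiting on the ops controller; the same two steps as above, as a reusable snippet (the OpsRequest name is the one from this run):

    # Remove finalizers so the OpsRequest is no longer held by its controller,
    # then force-delete it through kbcli.
    OPS=clkhouse-oxscub-volumeexpansion-pthdt
    kubectl patch opsrequests.operations "$OPS" --namespace ns-nhoig --type=merge -p '{"metadata":{"finalizers":[]}}'
    kbcli cluster delete-ops --name "$OPS" --force --auto-approve --namespace ns-nhoig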
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-oxscub`
set secret: clkhouse-oxscub-clickhouse-t2k-account-admin
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-oxscub-clickhouse-t2k-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check db_client batch [163] equal [163] data Success
cluster update terminationPolicy WipeOut
`kbcli cluster update clkhouse-oxscub --termination-policy=WipeOut --namespace ns-nhoig`
cluster.apps.kubeblocks.io/clkhouse-oxscub updated (no change)
check cluster status
`kbcli cluster list clkhouse-oxscub --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub ns-nhoig clickhouse WipeOut Running Feb 12,2026 10:05 UTC+0800 app.kubernetes.io/instance=clkhouse-oxscub,clusterdefinition.kubeblocks.io/name=clickhouse
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-ch-keeper-0 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-1 ns-nhoig clkhouse-oxscub ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-ch-keeper-2 ns-nhoig clkhouse-oxscub ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-ql2-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-ql2) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-0 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 10:41 UTC+0800
clkhouse-oxscub-clickhouse-t2k-1 ns-nhoig clkhouse-oxscub clickhouse(clickhouse-t2k) Running 0 300m / 300m 2254857830400m / 2254857830400m data:23Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 10:41 UTC+0800
check pod status done
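The listing now shows data:21Gi on the ch-keeper instances. A hedged cross-check against the PVCs themselves, reusing the label selector from the earlier `kubectl get pvc` call (the custom-columns view is an assumption, not harness output):

    # Confirm the expanded capacity on the underlying PVCs rather than the kbcli listing.
    kubectl get pvc --namespace ns-nhoig \
      -l app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/component-name=ch-keeper,apps.kubeblocks.io/vct-name=data \
      -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage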
DB_USERNAME:admin;DB_PASSWORD:0H4Y3s2wR1;DB_PORT:9000;DB_DATABASE:default
check cluster connect
`echo 'clickhouse-client --host clkhouse-oxscub-clickhouse-t2k.ns-nhoig.svc.cluster.local --port 9000 --user admin --password "0H4Y3s2wR1"' | kubectl exec -it clkhouse-oxscub-clickhouse-t2k-0 --namespace ns-nhoig -- bash`
check cluster connect done
cluster full backup
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`
`kubectl get secrets kb-backuprepo-njfcd -n kb-qqsns -o jsonpath="{.data.accessKeyId}"`
`kubectl get secrets kb-backuprepo-njfcd -n kb-qqsns -o jsonpath="{.data.secretAccessKey}"`
KUBEBLOCKS NAMESPACE:kb-qqsns
get kubeblocks namespace done
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-qqsns -o jsonpath="{.items[0].data.root-user}"`
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-qqsns -o jsonpath="{.items[0].data.root-password}"`
minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-qqsns.svc.cluster.local:9000
list minio bucket kbcli-test
`echo 'mc alias set minioserver http://kbcli-test-minio.kb-qqsns.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-557dc6b665-8qwr5 --namespace kb-qqsns -- bash`
list minio bucket done
default backuprepo:backuprepo-kbcli-test exists
`kbcli cluster backup clkhouse-oxscub --method full --namespace ns-nhoig`
Backup backup-ns-nhoig-clkhouse-oxscub-20260212110403 created successfully, you can view the progress: kbcli cluster list-backups --names=backup-ns-nhoig-clkhouse-oxscub-20260212110403 -n ns-nhoig
check backup status
`kbcli cluster list-backups clkhouse-oxscub --namespace ns-nhoig`
NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION
backup-ns-nhoig-clkhouse-oxscub-20260212110403 ns-nhoig clkhouse-oxscub full Delete Feb 12,2026 11:04 UTC+0800
backup_status:clkhouse-oxscub-full-Running
check backup status done
backup_status:backup-ns-nhoig-clkhouse-oxscub-20260212110403 ns-nhoig clkhouse-oxscub full Completed 20403 32s Delete Feb 12,2026 11:04 UTC+0800 Feb 12,2026 11:04 UTC+0800
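The backup path resolves credentials in two hops: the BackupRepo names a credential Secret (resolved above to kb-backuprepo-njfcd in kb-qqsns), and the MinIO root account comes from the minio release Secret. A sketch of the second hop plus the bucket listing, with names taken from this run and `kubectl exec -i` used so the piped input works non-interactively:

    # Decode the MinIO root credentials, then list the bucket from inside the minio pod.
    MINIO_USER=$(kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio \
      --namespace kb-qqsns -o jsonpath='{.items[0].data.root-user}' | base64 -d)
    MINIO_PASSWORD=$(kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio \
      --namespace kb-qqsns -o jsonpath='{.items[0].data.root-password}' | base64 -d)
    echo "mc alias set minioserver http://kbcli-test-minio.kb-qqsns.svc.cluster.local:9000 ${MINIO_USER} ${MINIO_PASSWORD}; mc ls minioserver" \
      | kubectl exec -i kbcli-test-minio-557dc6b665-8qwr5 --namespace kb-qqsns -- bash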
cluster restore backup
`kbcli cluster describe-backup --names backup-ns-nhoig-clkhouse-oxscub-20260212110403 --namespace ns-nhoig`
Name: backup-ns-nhoig-clkhouse-oxscub-20260212110403
Cluster: clkhouse-oxscub
Namespace: ns-nhoig
Spec:
  Method: full
  Policy Name: clkhouse-oxscub-clickhouse-backup-policy
  Actions:
    dp-backup-clickhouse-ql2-0: ActionType: Job WorkloadName: dp-backup-clickhouse-ql2-0-backup-ns-nhoig-clkhouse-oxscub-2026 TargetPodName: clkhouse-oxscub-clickhouse-ql2-0 Phase: Completed Start Time: Feb 12,2026 11:04 UTC+0800 Completion Time: Feb 12,2026 11:04 UTC+0800
    dp-backup-clickhouse-t2k-0: ActionType: Job WorkloadName: dp-backup-clickhouse-t2k-0-backup-ns-nhoig-clkhouse-oxscub-2026 TargetPodName: clkhouse-oxscub-clickhouse-t2k-0 Phase: Completed Start Time: Feb 12,2026 11:04 UTC+0800 Completion Time: Feb 12,2026 11:04 UTC+0800
Status:
  Phase: Completed
  Total Size: 20403
  ActionSet Name: clickhouse-full-backup
  Repository: backuprepo-kbcli-test
  Duration: 32s
  Start Time: Feb 12,2026 11:04 UTC+0800
  Completion Time: Feb 12,2026 11:04 UTC+0800
  Path: /ns-nhoig/clkhouse-oxscub-8c350645-8d4e-4da4-bd4b-7ad4cefd9d09/clickhouse/backup-ns-nhoig-clkhouse-oxscub-20260212110403
Warning Events:
`kbcli cluster restore clkhouse-oxscub-backup --backup backup-ns-nhoig-clkhouse-oxscub-20260212110403 --namespace ns-nhoig`
Cluster clkhouse-oxscub-backup created
check cluster status
`kbcli cluster list clkhouse-oxscub-backup --show-labels --namespace ns-nhoig`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-oxscub-backup ns-nhoig clickhouse WipeOut Creating Feb 12,2026 11:04 UTC+0800 clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Creating
`kubectl get pods -l app.kubernetes.io/instance=clkhouse-oxscub-backup -n ns-nhoig | (grep 'clkhouse-oxscub-backup-ch-keeper' || true)`
pod "clkhouse-oxscub-backup-ch-keeper-0" force deleted
pod "clkhouse-oxscub-backup-ch-keeper-1" force deleted
pod "clkhouse-oxscub-backup-ch-keeper-2" force deleted
cluster_status:Creating
[Error] check cluster status timeout
--------------------------------------get cluster clkhouse-oxscub-backup yaml--------------------------------------
`kubectl get cluster clkhouse-oxscub-backup -o yaml --namespace ns-nhoig`
apiVersion:
apps.kubeblocks.io/v1 kind: Cluster metadata: annotations: kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 kubeblocks.io/ops-request: '[{"name":"clkhouse-oxscub-backup","type":"Restore"}]' kubeblocks.io/restore-from-backup: '{"clickhouse":{"doReadyRestoreAfterClusterRunning":"false","name":"backup-ns-nhoig-clkhouse-oxscub-20260212110403","namespace":"ns-nhoig","volumeRestorePolicy":"Parallel"}}' creationTimestamp: "2026-02-12T03:04:41Z" finalizers: - cluster.kubeblocks.io/finalizer generation: 1 labels: clusterdefinition.kubeblocks.io/name: clickhouse name: clkhouse-oxscub-backup namespace: ns-nhoig resourceVersion: "85446" uid: d1e652e7-30c7-4c75-8520-216fd2e10eec spec: clusterDef: clickhouse componentSpecs: - annotations: kubeblocks.io/restart: "2026-02-12T02:39:19Z" componentDef: clickhouse-keeper-1.0.2 disableExporter: false name: ch-keeper podUpdatePolicy: PreferInPlace replicas: 3 resources: limits: cpu: 300m memory: 2254857830400m requests: cpu: 300m memory: 2254857830400m serviceVersion: 22.8.21 services: - name: default podService: false serviceType: ClusterIP systemAccounts: - disabled: false name: admin passwordConfig: length: 10 letterCase: MixedCases numDigits: 5 numSymbols: 0 seed: clkhouse-oxscub volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 21Gi shardings: - name: clickhouse shardingDef: clickhouse shards: 2 template: annotations: kubeblocks.io/restart: "2026-02-12T02:37:02Z" componentDef: clickhouse-1.0.2 disableExporter: false env: - name: INIT_CLUSTER_NAME value: default name: clickhouse podUpdatePolicy: PreferInPlace replicas: 2 resources: limits: cpu: 300m memory: 2254857830400m requests: cpu: 300m memory: 2254857830400m serviceVersion: 22.8.21 services: - name: default podService: false serviceType: ClusterIP systemAccounts: - disabled: false name: admin passwordConfig: length: 10 letterCase: MixedCases numDigits: 5 numSymbols: 0 seed: clkhouse-oxscub volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 23Gi terminationPolicy: WipeOut topology: cluster status: components: ch-keeper: observedGeneration: 1 phase: Creating upToDate: true conditions: - lastTransitionTime: "2026-02-12T03:04:41Z" message: 'The operator has started the provisioning of Cluster: clkhouse-oxscub-backup' observedGeneration: 1 reason: PreCheckSucceed status: "True" type: ProvisioningStarted - lastTransitionTime: "2026-02-12T03:04:41Z" message: Successfully applied for resources observedGeneration: 1 reason: ApplyResourcesSucceed status: "True" type: ApplyResources observedGeneration: 1 phase: Creating shardings: clickhouse: message: reason: the sharding to be created observedGeneration: 1 ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe cluster clkhouse-oxscub-backup--------------------------------------  `kubectl describe cluster clkhouse-oxscub-backup --namespace ns-nhoig `(B  Name: clkhouse-oxscub-backup Namespace: ns-nhoig Labels: clusterdefinition.kubeblocks.io/name=clickhouse Annotations: kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 kubeblocks.io/ops-request: [{"name":"clkhouse-oxscub-backup","type":"Restore"}] kubeblocks.io/restore-from-backup: {"clickhouse":{"doReadyRestoreAfterClusterRunning":"false","name":"backup-ns-nhoig-clkhouse-oxscub-20260212110403","namespace":"ns-nhoig",... 
API Version: apps.kubeblocks.io/v1
Kind: Cluster
Metadata: Creation Timestamp: 2026-02-12T03:04:41Z Finalizers: cluster.kubeblocks.io/finalizer Generation: 1 Resource Version: 85446 UID: d1e652e7-30c7-4c75-8520-216fd2e10eec
Spec: Cluster Def: clickhouse Component Specs: Annotations: kubeblocks.io/restart: 2026-02-12T02:39:19Z Component Def: clickhouse-keeper-1.0.2 Disable Exporter: false Name: ch-keeper Pod Update Policy: PreferInPlace Replicas: 3 Resources: Limits: Cpu: 300m Memory: 2254857830400m Requests: Cpu: 300m Memory: 2254857830400m Service Version: 22.8.21 Services: Name: default Pod Service: false Service Type: ClusterIP System Accounts: Disabled: false Name: admin Password Config: Length: 10 Letter Case: MixedCases Num Digits: 5 Num Symbols: 0 Seed: clkhouse-oxscub Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 21Gi Shardings: Name: clickhouse Sharding Def: clickhouse Shards: 2 Template: Annotations: kubeblocks.io/restart: 2026-02-12T02:37:02Z Component Def: clickhouse-1.0.2 Disable Exporter: false Env: Name: INIT_CLUSTER_NAME Value: default Name: clickhouse Pod Update Policy: PreferInPlace Replicas: 2 Resources: Limits: Cpu: 300m Memory: 2254857830400m Requests: Cpu: 300m Memory: 2254857830400m Service Version: 22.8.21 Services: Name: default Pod Service: false Service Type: ClusterIP System Accounts: Disabled: false Name: admin Password Config: Length: 10 Letter Case: MixedCases Num Digits: 5 Num Symbols: 0 Seed: clkhouse-oxscub Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 23Gi
Termination Policy: WipeOut
Topology: cluster
Status: Components: Ch - Keeper: Observed Generation: 1 Phase: Creating Up To Date: true Conditions: Last Transition Time: 2026-02-12T03:04:41Z Message: The operator has started the provisioning of Cluster: clkhouse-oxscub-backup Observed Generation: 1 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2026-02-12T03:04:41Z Message: Successfully applied for resources Observed Generation: 1 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Observed Generation: 1 Phase: Creating Shardings: Clickhouse: Message: Reason: the sharding to be created Observed Generation: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal PreCheckSucceed 7m16s cluster-controller The operator has started the provisioning of Cluster: clkhouse-oxscub-backup
Normal ApplyResourcesSucceed 7m16s cluster-controller Successfully applied for resources
Normal ClusterComponentPhaseTransition 112s (x9 over 7m15s) cluster-controller cluster component ch-keeper is Creating
Warning ReconcileBackupPolicyFail 111s (x17 over 7m16s) backup-policy-driver-controller failed to reconcile: sharding components clickhouse not found
------------------------------------------------------------------------------------------------------------------
get cluster clkhouse-oxscub-backup shard clickhouse component name
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub-backup,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig`
no component name found
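The empty component query and the ReconcileBackupPolicyFail warning agree: the restore has produced no Component objects for the clickhouse sharding yet. The same check again, with an assumed custom-columns view (not harness output) to make the result explicit:

    # An empty result here is exactly what the reconcile warning complains about:
    # no Component objects exist yet for the 'clickhouse' sharding.
    kubectl get component --namespace ns-nhoig \
      -l "app.kubernetes.io/instance=clkhouse-oxscub-backup,apps.kubeblocks.io/sharding-name=clickhouse" \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase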
check pod status
`kbcli cluster list-instances clkhouse-oxscub-backup --namespace ns-nhoig`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-oxscub-backup-ch-keeper-0 ns-nhoig clkhouse-oxscub-backup ch-keeper Init:0/3 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000003/10.224.0.6 Feb 12,2026 11:09 UTC+0800
clkhouse-oxscub-backup-ch-keeper-1 ns-nhoig clkhouse-oxscub-backup ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000000/10.224.0.9 Feb 12,2026 11:09 UTC+0800
clkhouse-oxscub-backup-ch-keeper-2 ns-nhoig clkhouse-oxscub-backup ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-17242166-vmss000001/10.224.0.8 Feb 12,2026 11:09 UTC+0800
pod_status:Init:0/3
[Error] check pod status timeout
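keeper-0 never gets past Init:0/3, so its init containers are the thing to inspect; a minimal probe of their reported states (this matches the initContainerStatuses visible in the pod YAML dumped below):

    # Print each init container's name and current state for the stuck pod.
    kubectl get pod clkhouse-oxscub-backup-ch-keeper-0 --namespace ns-nhoig \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'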
--------------------------------------describe cluster clkhouse-oxscub-backup--------------------------------------
`kubectl describe cluster clkhouse-oxscub-backup --namespace ns-nhoig`
Name: clkhouse-oxscub-backup
Namespace: ns-nhoig
Labels: clusterdefinition.kubeblocks.io/name=clickhouse
Annotations: kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 kubeblocks.io/ops-request: [{"name":"clkhouse-oxscub-backup","type":"Restore"}] kubeblocks.io/restore-from-backup: {"clickhouse":{"doReadyRestoreAfterClusterRunning":"false","name":"backup-ns-nhoig-clkhouse-oxscub-20260212110403","namespace":"ns-nhoig",...
API Version: apps.kubeblocks.io/v1
Kind: Cluster
Metadata: Creation Timestamp: 2026-02-12T03:04:41Z Finalizers: cluster.kubeblocks.io/finalizer Generation: 1 Resource Version: 85446 UID: d1e652e7-30c7-4c75-8520-216fd2e10eec
Spec: Cluster Def: clickhouse Component Specs: Annotations: kubeblocks.io/restart: 2026-02-12T02:39:19Z Component Def: clickhouse-keeper-1.0.2 Disable Exporter: false Name: ch-keeper Pod Update Policy: PreferInPlace Replicas: 3 Resources: Limits: Cpu: 300m Memory: 2254857830400m Requests: Cpu: 300m Memory: 2254857830400m Service Version: 22.8.21 Services: Name: default Pod Service: false Service Type: ClusterIP System Accounts: Disabled: false Name: admin Password Config: Length: 10 Letter Case: MixedCases Num Digits: 5 Num Symbols: 0 Seed: clkhouse-oxscub Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 21Gi Shardings: Name: clickhouse Sharding Def: clickhouse Shards: 2 Template: Annotations: kubeblocks.io/restart: 2026-02-12T02:37:02Z Component Def: clickhouse-1.0.2 Disable Exporter: false Env: Name: INIT_CLUSTER_NAME Value: default Name: clickhouse Pod Update Policy: PreferInPlace Replicas: 2 Resources: Limits: Cpu: 300m Memory: 2254857830400m Requests: Cpu: 300m Memory: 2254857830400m Service Version: 22.8.21 Services: Name: default Pod Service: false Service Type: ClusterIP System Accounts: Disabled: false Name: admin Password Config: Length: 10 Letter Case: MixedCases Num Digits: 5 Num Symbols: 0 Seed: clkhouse-oxscub Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 23Gi
Termination Policy: WipeOut
Topology: cluster
Status: Components: Ch - Keeper: Observed Generation: 1 Phase: Creating Up To Date: true Conditions: Last Transition Time: 2026-02-12T03:04:41Z Message: The operator has started the provisioning of Cluster: clkhouse-oxscub-backup Observed Generation: 1 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2026-02-12T03:04:41Z Message: Successfully applied for resources Observed Generation: 1 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Observed Generation: 1 Phase: Creating Shardings: Clickhouse: Message: Reason: the sharding to be created Observed Generation: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal PreCheckSucceed 14m cluster-controller The operator has started the provisioning of Cluster: clkhouse-oxscub-backup
Normal ApplyResourcesSucceed 14m cluster-controller Successfully applied for resources
Normal ClusterComponentPhaseTransition 4m7s (x10 over 14m) cluster-controller cluster component ch-keeper is Creating
Warning ReconcileBackupPolicyFail 4m6s (x18 over 14m) backup-policy-driver-controller failed to reconcile: sharding components clickhouse not found
------------------------------------------------------------------------------------------------------------------
--------------------------------------get pod clkhouse-oxscub-backup-ch-keeper-0 clkhouse-oxscub-backup-ch-keeper-1 clkhouse-oxscub-backup-ch-keeper-2 yaml--------------------------------------
`kubectl get pod clkhouse-oxscub-backup-ch-keeper-0 -o yaml --namespace ns-nhoig`
apiVersion: v1 kind: Pod metadata: annotations: kubeblocks.io/restart: "2026-02-12T02:39:19Z" creationTimestamp: "2026-02-12T03:09:51Z" labels: app.kubernetes.io/component: clickhouse-keeper-1.0.2 app.kubernetes.io/instance: clkhouse-oxscub-backup app.kubernetes.io/managed-by: kubeblocks apps.kubeblocks.io/component-name:
ch-keeper apps.kubeblocks.io/pod-name: clkhouse-oxscub-backup-ch-keeper-0 apps.kubeblocks.io/release-phase: stable apps.kubeblocks.io/service-version: 22.8.21 controller-revision-hash: 7f8d889464 workloads.kubeblocks.io/instance: clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by: InstanceSet name: clkhouse-oxscub-backup-ch-keeper-0 namespace: ns-nhoig ownerReferences: - apiVersion: workloads.kubeblocks.io/v1 blockOwnerDeletion: true controller: true kind: InstanceSet name: clkhouse-oxscub-backup-ch-keeper uid: da399833-728b-422e-9b74-0170618a5861 resourceVersion: "89524" uid: e907c324-0dbb-4ebe-8282-fd3d41ca8c6a spec: containers: - command: - bash - -xc - | /scripts/bootstrap-keeper.sh env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: clickhouse ports: - containerPort: 8123 name: http protocol: TCP - containerPort: 8443 name: https protocol: TCP - containerPort: 9000 name: tcp protocol: TCP - containerPort: 9009 name: http-intersrv protocol: TCP - containerPort: 9010 name: https-intersrv protocol: TCP - containerPort: 9440 name: tcp-secure protocol: TCP - containerPort: 8001 name: http-metrics protocol: TCP - containerPort: 9181 name: chk-tcp protocol: TCP - containerPort: 9234 name: chk-raft protocol: TCP - containerPort: 9281 name: chk-tcp-tls protocol: TCP - containerPort: 9444 name: chk-raft-tls protocol: TCP resources: limits: cpu: 300m memory: 2254857830400m requests: cpu: 300m memory: 2254857830400m securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true - args: - --port - "3501" - --streaming-port - "3502" command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: '[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: 
clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent ports: - containerPort: 3501 name: http protocol: TCP - containerPort: 3502 name: streaming protocol: TCP resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 startupProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 3501 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: clkhouse-oxscub-backup-ch-keeper-0 initContainers: - command: - sh - -c - | cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imagePullPolicy: IfNotPresent name: copy-tools resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true - command: - cp - -r - /bin/kbagent - /kubeblocks/ env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: init-kbagent resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true - args: - --server=false command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 
2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: '[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent-worker resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true nodeName: aks-cicdamdpool-17242166-vmss000003 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 0 seccompProfile: type: RuntimeDefault serviceAccount: kb-clickhouse-keeper-1.0.2 serviceAccountName: kb-clickhouse-keeper-1.0.2 subdomain: clkhouse-oxscub-backup-ch-keeper-headless terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: shared-tools - name: data persistentVolumeClaim: claimName: data-clkhouse-oxscub-backup-ch-keeper-0 - emptyDir: {} name: kubeblocks - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl name: config - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl name: client-config - configMap: defaultMode: 365 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts name: scripts - name: kube-api-access-mmk45 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" message: 'containers with incomplete status: [copy-tools init-kbagent kbagent-worker]' reason: ContainersNotInitialized status: "False" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" message: 'containers with unready status: [clickhouse kbagent]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" message: 'containers with unready status: [clickhouse kbagent]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" status: "True" type: PodScheduled containerStatuses: - image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: "" lastState: {} name: clickhouse ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - 
mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true recursiveReadOnly: Disabled - image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: "" lastState: {} name: kbagent ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.6 hostIPs: - ip: 10.224.0.6 initContainerStatuses: - image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imageID: "" lastState: {} name: copy-tools ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true recursiveReadOnly: Disabled - image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: "" lastState: {} name: init-kbagent ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true recursiveReadOnly: Disabled - image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: "" lastState: {} name: kbagent-worker ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-mmk45 readOnly: true recursiveReadOnly: Disabled phase: Pending qosClass: Burstable startTime: "2026-02-12T03:09:51Z" ------------------------------------------------------------------------------------------------------------------  `kubectl get pod clkhouse-oxscub-backup-ch-keeper-1 -o yaml --namespace ns-nhoig `(B  apiVersion: v1 kind: Pod metadata: annotations: apps.kubeblocks.io/last-role-snapshot-version: "1770865813441552" kubeblocks.io/restart: "2026-02-12T02:39:19Z" creationTimestamp: "2026-02-12T03:09:51Z" labels: app.kubernetes.io/component: clickhouse-keeper-1.0.2 app.kubernetes.io/instance: clkhouse-oxscub-backup app.kubernetes.io/managed-by: kubeblocks apps.kubeblocks.io/component-name: ch-keeper apps.kubeblocks.io/pod-name: clkhouse-oxscub-backup-ch-keeper-1 apps.kubeblocks.io/release-phase: stable apps.kubeblocks.io/service-version: 22.8.21 controller-revision-hash: 7f8d889464 kubeblocks.io/role: leader workloads.kubeblocks.io/instance: clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by: InstanceSet name: clkhouse-oxscub-backup-ch-keeper-1 namespace: ns-nhoig ownerReferences: - apiVersion: workloads.kubeblocks.io/v1 blockOwnerDeletion: 
true controller: true kind: InstanceSet name: clkhouse-oxscub-backup-ch-keeper uid: da399833-728b-422e-9b74-0170618a5861 resourceVersion: "90020" uid: b00cd229-136f-473d-a0fb-b126bdf1c4f7 spec: containers: - command: - bash - -xc - | /scripts/bootstrap-keeper.sh env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: clickhouse ports: - containerPort: 8123 name: http protocol: TCP - containerPort: 8443 name: https protocol: TCP - containerPort: 9000 name: tcp protocol: TCP - containerPort: 9009 name: http-intersrv protocol: TCP - containerPort: 9010 name: https-intersrv protocol: TCP - containerPort: 9440 name: tcp-secure protocol: TCP - containerPort: 8001 name: http-metrics protocol: TCP - containerPort: 9181 name: chk-tcp protocol: TCP - containerPort: 9234 name: chk-raft protocol: TCP - containerPort: 9281 name: chk-tcp-tls protocol: TCP - containerPort: 9444 name: chk-raft-tls protocol: TCP resources: limits: cpu: 300m memory: 2254857830400m requests: cpu: 300m memory: 2254857830400m securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true - args: - --port - "3501" - --streaming-port - "3502" command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: '[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent ports: - containerPort: 3501 name: http protocol: TCP - containerPort: 3502 name: streaming protocol: TCP resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 startupProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 3501 timeoutSeconds: 1 
terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: clkhouse-oxscub-backup-ch-keeper-1 initContainers: - command: - sh - -c - | cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imagePullPolicy: IfNotPresent name: copy-tools resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true - command: - cp - -r - /bin/kbagent - /kubeblocks/ env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: init-kbagent resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true - args: - --server=false command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: '[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent-worker resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 terminationMessagePath: 
/dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true nodeName: aks-cicdamdpool-17242166-vmss000000 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 0 seccompProfile: type: RuntimeDefault serviceAccount: kb-clickhouse-keeper-1.0.2 serviceAccountName: kb-clickhouse-keeper-1.0.2 subdomain: clkhouse-oxscub-backup-ch-keeper-headless terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: shared-tools - name: data persistentVolumeClaim: claimName: data-clkhouse-oxscub-backup-ch-keeper-1 - emptyDir: {} name: kubeblocks - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl name: config - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl name: client-config - configMap: defaultMode: 365 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts name: scripts - name: kube-api-access-vn7sv projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:54Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:58Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-02-12T03:10:03Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-12T03:10:03Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:51Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://2923216b80797588bb0751ee69b87d6d450c09008602586d1e342bd72c599b01 image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: clickhouse ready: true restartCount: 0 started: true state: running: startedAt: "2026-02-12T03:09:58Z" volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true recursiveReadOnly: Disabled - containerID: containerd://9dc574a554f987448ddfe1c6f54986130ccbb422b4a32888b70e6e652b29447d image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: kbagent ready: true restartCount: 0 started: true state: 
running: startedAt: "2026-02-12T03:09:58Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.9 hostIPs: - ip: 10.224.0.9 initContainerStatuses: - containerID: containerd://9634c2637729b8fb32b2aa3753b786a713f0702663fb391c2805ace1b9dbd0e3 image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imageID: docker.io/apecloud/bash-busybox@sha256:0af829c4a29058ccc2c41f6fbf3b50b83b290dd02d7f164fff60bb7429da8e5a lastState: {} name: copy-tools ready: true restartCount: 0 started: false state: terminated: containerID: containerd://9634c2637729b8fb32b2aa3753b786a713f0702663fb391c2805ace1b9dbd0e3 exitCode: 0 finishedAt: "2026-02-12T03:09:53Z" reason: Completed startedAt: "2026-02-12T03:09:53Z" volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true recursiveReadOnly: Disabled - containerID: containerd://6d1e6eb51de335352a550543b4c43974108e3d629d5a959001e707d485122858 image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: init-kbagent ready: true restartCount: 0 started: false state: terminated: containerID: containerd://6d1e6eb51de335352a550543b4c43974108e3d629d5a959001e707d485122858 exitCode: 0 finishedAt: "2026-02-12T03:09:55Z" reason: Completed startedAt: "2026-02-12T03:09:55Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true recursiveReadOnly: Disabled - containerID: containerd://b9ff453005541edb799dab39aefc66391143a8c8415cf5d10c0e7d46f2d2f19d image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: kbagent-worker ready: true restartCount: 0 started: false state: terminated: containerID: containerd://b9ff453005541edb799dab39aefc66391143a8c8415cf5d10c0e7d46f2d2f19d exitCode: 0 finishedAt: "2026-02-12T03:09:57Z" reason: Completed startedAt: "2026-02-12T03:09:57Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vn7sv readOnly: true recursiveReadOnly: Disabled phase: Running podIP: 10.244.4.239 podIPs: - ip: 10.244.4.239 qosClass: Burstable startTime: "2026-02-12T03:09:51Z" ------------------------------------------------------------------------------------------------------------------  `kubectl get pod clkhouse-oxscub-backup-ch-keeper-2 -o yaml --namespace ns-nhoig `(B  apiVersion: v1 kind: Pod metadata: annotations: apps.kubeblocks.io/last-role-snapshot-version: "1770865811720123" kubeblocks.io/restart: "2026-02-12T02:39:19Z" creationTimestamp: "2026-02-12T03:09:52Z" labels: app.kubernetes.io/component: 
clickhouse-keeper-1.0.2 app.kubernetes.io/instance: clkhouse-oxscub-backup app.kubernetes.io/managed-by: kubeblocks apps.kubeblocks.io/component-name: ch-keeper apps.kubeblocks.io/pod-name: clkhouse-oxscub-backup-ch-keeper-2 apps.kubeblocks.io/release-phase: stable apps.kubeblocks.io/service-version: 22.8.21 controller-revision-hash: 7f8d889464 kubeblocks.io/role: follower workloads.kubeblocks.io/instance: clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by: InstanceSet name: clkhouse-oxscub-backup-ch-keeper-2 namespace: ns-nhoig ownerReferences: - apiVersion: workloads.kubeblocks.io/v1 blockOwnerDeletion: true controller: true kind: InstanceSet name: clkhouse-oxscub-backup-ch-keeper uid: da399833-728b-422e-9b74-0170618a5861 resourceVersion: "89987" uid: d7a44479-13a2-4794-aafd-9fbd982b32d1 spec: containers: - command: - bash - -xc - | /scripts/bootstrap-keeper.sh env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: clickhouse ports: - containerPort: 8123 name: http protocol: TCP - containerPort: 8443 name: https protocol: TCP - containerPort: 9000 name: tcp protocol: TCP - containerPort: 9009 name: http-intersrv protocol: TCP - containerPort: 9010 name: https-intersrv protocol: TCP - containerPort: 9440 name: tcp-secure protocol: TCP - containerPort: 8001 name: http-metrics protocol: TCP - containerPort: 9181 name: chk-tcp protocol: TCP - containerPort: 9234 name: chk-raft protocol: TCP - containerPort: 9281 name: chk-tcp-tls protocol: TCP - containerPort: 9444 name: chk-raft-tls protocol: TCP resources: limits: cpu: 300m memory: 2254857830400m requests: cpu: 300m memory: 2254857830400m securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true - args: - --port - "3501" - --streaming-port - "3502" command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: 
'[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent ports: - containerPort: 3501 name: http protocol: TCP - containerPort: 3502 name: streaming protocol: TCP resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 startupProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 3501 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: clkhouse-oxscub-backup-ch-keeper-2 initContainers: - command: - sh - -c - | cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imagePullPolicy: IfNotPresent name: copy-tools resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true - command: - cp - -r - /bin/kbagent - /kubeblocks/ env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: init-kbagent resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true - args: - --server=false command: - /kubeblocks/kbagent env: - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-oxscub-backup-ch-keeper-account-admin - name: KB_AGENT_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: KB_AGENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KB_AGENT_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KB_AGENT_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: KB_AGENT_ACTION value: '[{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e 
/tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}]' - name: KB_AGENT_PROBE value: '[{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}]' envFrom: - configMapRef: name: clkhouse-oxscub-backup-ch-keeper-env optional: false image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imagePullPolicy: IfNotPresent name: kbagent-worker resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: runAsGroup: 1000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true nodeName: aks-cicdamdpool-17242166-vmss000001 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 0 seccompProfile: type: RuntimeDefault serviceAccount: kb-clickhouse-keeper-1.0.2 serviceAccountName: kb-clickhouse-keeper-1.0.2 subdomain: clkhouse-oxscub-backup-ch-keeper-headless terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: shared-tools - name: data persistentVolumeClaim: claimName: data-clkhouse-oxscub-backup-ch-keeper-2 - emptyDir: {} name: kubeblocks - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl name: config - configMap: defaultMode: 292 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl name: client-config - configMap: defaultMode: 365 name: clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts name: scripts - name: kube-api-access-vjdm2 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:54Z" status: "True" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:56Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2026-02-12T03:10:03Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-12T03:10:03Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-12T03:09:52Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://ca1eba8752e7e7bffb528694d0996079e79e40593ebae9dc85dbff9432b5f992 image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: clickhouse ready: true restartCount: 0 started: true state: running: startedAt: "2026-02-12T03:09:56Z" volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: 
/opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true recursiveReadOnly: Disabled - containerID: containerd://9b0f4c2e8fab669243d957da1fcf72b15c3f1f650e79ec1a117bea8efe045596 image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: kbagent ready: true restartCount: 0 started: true state: running: startedAt: "2026-02-12T03:09:56Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.8 hostIPs: - ip: 10.224.0.8 initContainerStatuses: - containerID: containerd://0b07dfb914f7038ad4eeb79e3592db2c8c066f630577f93df62a50b2d0d6fe54 image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl imageID: docker.io/apecloud/bash-busybox@sha256:0af829c4a29058ccc2c41f6fbf3b50b83b290dd02d7f164fff60bb7429da8e5a lastState: {} name: copy-tools ready: true restartCount: 0 started: false state: terminated: containerID: containerd://0b07dfb914f7038ad4eeb79e3592db2c8c066f630577f93df62a50b2d0d6fe54 exitCode: 0 finishedAt: "2026-02-12T03:09:53Z" reason: Completed startedAt: "2026-02-12T03:09:53Z" volumeMounts: - mountPath: /shared-tools name: shared-tools - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true recursiveReadOnly: Disabled - containerID: containerd://b3e92ff8d4b0207fcf7a9ff4a7df3a1002d71167ed834cc29029ecbf88ba4b9b image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: init-kbagent ready: true restartCount: 0 started: false state: terminated: containerID: containerd://b3e92ff8d4b0207fcf7a9ff4a7df3a1002d71167ed834cc29029ecbf88ba4b9b exitCode: 0 finishedAt: "2026-02-12T03:09:54Z" reason: Completed startedAt: "2026-02-12T03:09:54Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vjdm2 readOnly: true recursiveReadOnly: Disabled - containerID: containerd://f54f468ce9cc9e2516fcff943a855d23587df8fbb772b8334ab5f38dbc8bdd04 image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 imageID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb lastState: {} name: kbagent-worker ready: true restartCount: 0 started: false state: terminated: containerID: containerd://f54f468ce9cc9e2516fcff943a855d23587df8fbb772b8334ab5f38dbc8bdd04 exitCode: 0 finishedAt: "2026-02-12T03:09:55Z" reason: Completed startedAt: "2026-02-12T03:09:55Z" volumeMounts: - mountPath: /kubeblocks name: kubeblocks - mountPath: /bitnami/clickhouse name: data - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /scripts name: scripts - mountPath: /etc/clickhouse-client name: client-config - mountPath: /shared-tools name: shared-tools - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-vjdm2
      readOnly: true
      recursiveReadOnly: Disabled
  phase: Running
  podIP: 10.244.3.229
  podIPs:
  - ip: 10.244.3.229
  qosClass: Burstable
  startTime: "2026-02-12T03:09:52Z"
------------------------------------------------------------------------------------------------------------------
--------------------------------------describe pod clkhouse-oxscub-backup-ch-keeper-0 clkhouse-oxscub-backup-ch-keeper-1 clkhouse-oxscub-backup-ch-keeper-2--------------------------------------
 `kubectl describe pod clkhouse-oxscub-backup-ch-keeper-0 --namespace ns-nhoig ` 
Name: clkhouse-oxscub-backup-ch-keeper-0
Namespace: ns-nhoig
Priority: 0
Service Account: kb-clickhouse-keeper-1.0.2
Node: aks-cicdamdpool-17242166-vmss000003/10.224.0.6
Start Time: Thu, 12 Feb 2026 11:09:51 +0800
Labels: app.kubernetes.io/component=clickhouse-keeper-1.0.2 app.kubernetes.io/instance=clkhouse-oxscub-backup app.kubernetes.io/managed-by=kubeblocks apps.kubeblocks.io/component-name=ch-keeper apps.kubeblocks.io/pod-name=clkhouse-oxscub-backup-ch-keeper-0 apps.kubeblocks.io/release-phase=stable apps.kubeblocks.io/service-version=22.8.21 controller-revision-hash=7f8d889464 workloads.kubeblocks.io/instance=clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by=InstanceSet
Annotations: kubeblocks.io/restart: 2026-02-12T02:39:19Z
Status: Pending
SeccompProfile: RuntimeDefault
IP:
IPs:
Controlled By: InstanceSet/clkhouse-oxscub-backup-ch-keeper
Init Containers:
  copy-tools:
    Container ID:
    Image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl
    Image ID:
    Port: Host Port:
    Command: sh -c cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc
    State: Waiting Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Limits: cpu: 0 memory: 0
    Requests: cpu: 0 memory: 0
    Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false
    Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false
    Mounts: /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmk45 (ro)
  init-kbagent:
    Container ID:
    Image: docker.io/apecloud/kubeblocks-tools:1.0.2
    Image ID:
    Port: Host Port:
    Command: cp -r /bin/kbagent /kubeblocks/
    State: Waiting Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Limits: cpu: 0 memory: 0
    Requests: cpu: 0 memory: 0
    Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false
    Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false
    Mounts: /kubeblocks from kubeblocks (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmk45 (ro)
  kbagent-worker:
    Container ID:
    Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33
    Image ID:
    Port: Host Port:
    Command: /kubeblocks/kbagent
    Args: --server=false
    State: Waiting Reason: PodInitializing
    Ready: False
    Restart Count: 0
    Limits: cpu: 0 memory: 0
    Requests: cpu: 0 memory: 0
    Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false
    Environment:
      CLICKHOUSE_ADMIN_PASSWORD: Optional: false
      KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace)
      KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-0 (v1:metadata.name)
      KB_AGENT_POD_UID: (v1:metadata.uid)
      KB_AGENT_NODE_NAME: (v1:spec.nodeName)
      KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log
2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmk45 (ro) Containers: clickhouse: Container ID: Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: Ports: 8123/TCP, 8443/TCP, 9000/TCP, 9009/TCP, 9010/TCP, 9440/TCP, 8001/TCP, 9181/TCP, 9234/TCP, 9281/TCP, 9444/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Command: bash -xc /scripts/bootstrap-keeper.sh State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 300m memory: 2254857830400m Requests: cpu: 300m memory: 2254857830400m Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmk45 (ro) kbagent: Container ID: Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: Ports: 3501/TCP, 3502/TCP Host Ports: 0/TCP, 0/TCP Command: /kubeblocks/kbagent Args: --port 3501 --streaming-port 3502 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Startup: tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace) KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-0 (v1:metadata.name) KB_AGENT_POD_UID: (v1:metadata.uid) KB_AGENT_NODE_NAME: (v1:spec.nodeName) KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mmk45 (ro) Conditions: Type Status PodReadyToStartContainers False Initialized False Ready False ContainersReady False 
  PodScheduled True
Volumes:
  shared-tools:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium: SizeLimit:
  data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: data-clkhouse-oxscub-backup-ch-keeper-0
    ReadOnly: false
  kubeblocks:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium: SizeLimit:
  config:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl
    Optional: false
  client-config:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl
    Optional: false
  scripts:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts
    Optional: false
  kube-api-access-mmk45:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 9m49s default-scheduler Successfully assigned ns-nhoig/clkhouse-oxscub-backup-ch-keeper-0 to aks-cicdamdpool-17242166-vmss000003
  Warning FailedAttachVolume 26s (x4 over 6m54s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-7d893ff5-fd9c-4db0-9d96-82b3d63a5643" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-qcnisgon-group_cicd-aks-qcnisgon_eastus/providers/Microsoft.Compute/disks/pvc-7d893ff5-fd9c-4db0-9d96-82b3d63a5643
------------------------------------------------------------------------------------------------------------------
 `kubectl describe pod clkhouse-oxscub-backup-ch-keeper-1 --namespace ns-nhoig ` 
Name: clkhouse-oxscub-backup-ch-keeper-1
Namespace: ns-nhoig
Priority: 0
Service Account: kb-clickhouse-keeper-1.0.2
Node: aks-cicdamdpool-17242166-vmss000000/10.224.0.9
Start Time: Thu, 12 Feb 2026 11:09:51 +0800
Labels: app.kubernetes.io/component=clickhouse-keeper-1.0.2 app.kubernetes.io/instance=clkhouse-oxscub-backup app.kubernetes.io/managed-by=kubeblocks apps.kubeblocks.io/component-name=ch-keeper apps.kubeblocks.io/pod-name=clkhouse-oxscub-backup-ch-keeper-1 apps.kubeblocks.io/release-phase=stable apps.kubeblocks.io/service-version=22.8.21 controller-revision-hash=7f8d889464 kubeblocks.io/role=leader workloads.kubeblocks.io/instance=clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by=InstanceSet
Annotations: apps.kubeblocks.io/last-role-snapshot-version: 1770865813441552 kubeblocks.io/restart: 2026-02-12T02:39:19Z
Status: Running
SeccompProfile: RuntimeDefault
IP: 10.244.4.239
IPs: IP: 10.244.4.239
Controlled By: InstanceSet/clkhouse-oxscub-backup-ch-keeper
Init Containers:
  copy-tools:
    Container ID: containerd://9634c2637729b8fb32b2aa3753b786a713f0702663fb391c2805ace1b9dbd0e3
    Image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl
    Image ID: docker.io/apecloud/bash-busybox@sha256:0af829c4a29058ccc2c41f6fbf3b50b83b290dd02d7f164fff60bb7429da8e5a
    Port: Host Port:
    Command: sh -c cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc
    State: Terminated Reason: Completed Exit Code: 0
    Started: Thu, 12 Feb 2026 11:09:53 +0800
    Finished: Thu,
12 Feb 2026 11:09:53 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn7sv (ro) init-kbagent: Container ID: containerd://6d1e6eb51de335352a550543b4c43974108e3d629d5a959001e707d485122858 Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: cp -r /bin/kbagent /kubeblocks/ State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 11:09:55 +0800 Finished: Thu, 12 Feb 2026 11:09:55 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /kubeblocks from kubeblocks (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn7sv (ro) kbagent-worker: Container ID: containerd://b9ff453005541edb799dab39aefc66391143a8c8415cf5d10c0e7d46f2d2f19d Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb Port: Host Port: Command: /kubeblocks/kbagent Args: --server=false State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 11:09:57 +0800 Finished: Thu, 12 Feb 2026 11:09:57 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace) KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-1 (v1:metadata.name) KB_AGENT_POD_UID: (v1:metadata.uid) KB_AGENT_NODE_NAME: (v1:spec.nodeName) KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn7sv (ro) Containers: clickhouse: Container ID: containerd://2923216b80797588bb0751ee69b87d6d450c09008602586d1e342bd72c599b01 Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb Ports: 8123/TCP, 8443/TCP, 9000/TCP, 9009/TCP, 9010/TCP, 9440/TCP, 8001/TCP, 9181/TCP, 9234/TCP, 9281/TCP, 9444/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 
0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Command: bash -xc /scripts/bootstrap-keeper.sh State: Running Started: Thu, 12 Feb 2026 11:09:58 +0800 Ready: True Restart Count: 0 Limits: cpu: 300m memory: 2254857830400m Requests: cpu: 300m memory: 2254857830400m Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn7sv (ro) kbagent: Container ID: containerd://9dc574a554f987448ddfe1c6f54986130ccbb422b4a32888b70e6e652b29447d Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb Ports: 3501/TCP, 3502/TCP Host Ports: 0/TCP, 0/TCP Command: /kubeblocks/kbagent Args: --port 3501 --streaming-port 3502 State: Running Started: Thu, 12 Feb 2026 11:09:58 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Startup: tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace) KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-1 (v1:metadata.name) KB_AGENT_POD_UID: (v1:metadata.uid) KB_AGENT_NODE_NAME: (v1:spec.nodeName) KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vn7sv (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready True ContainersReady True PodScheduled True Volumes: shared-tools: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-clkhouse-oxscub-backup-ch-keeper-1 ReadOnly: false kubeblocks: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl Optional: false client-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl Optional: false scripts: Type: ConfigMap (a volume populated by a ConfigMap) Name: 
clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts Optional: false
  kube-api-access-vn7sv:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 9m49s default-scheduler Successfully assigned ns-nhoig/clkhouse-oxscub-backup-ch-keeper-1 to aks-cicdamdpool-17242166-vmss000000
  Normal Pulled 9m48s kubelet Container image "docker.io/apecloud/bash-busybox:1.37.0-musl-curl" already present on machine
  Normal Created 9m48s kubelet Created container: copy-tools
  Normal Started 9m48s kubelet Started container copy-tools
  Normal Pulled 9m46s kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on machine
  Normal Created 9m46s kubelet Created container: init-kbagent
  Normal Started 9m46s kubelet Started container init-kbagent
  Normal Pulled 9m44s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m44s kubelet Created container: kbagent-worker
  Normal Started 9m44s kubelet Started container kbagent-worker
  Normal Pulled 9m43s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m43s kubelet Created container: clickhouse
  Normal Started 9m43s kubelet Started container clickhouse
  Normal Pulled 9m43s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m43s kubelet Created container: kbagent
  Normal Started 9m43s kubelet Started container kbagent
  Normal roleProbe 9m28s kbagent {"instance":"clkhouse-oxscub-backup-ch-keeper","probe":"roleProbe","code":0,"output":"bGVhZGVy"}
------------------------------------------------------------------------------------------------------------------
 `kubectl describe pod clkhouse-oxscub-backup-ch-keeper-2 --namespace ns-nhoig ` 
Name: clkhouse-oxscub-backup-ch-keeper-2
Namespace: ns-nhoig
Priority: 0
Service Account: kb-clickhouse-keeper-1.0.2
Node: aks-cicdamdpool-17242166-vmss000001/10.224.0.8
Start Time: Thu, 12 Feb 2026 11:09:52 +0800
Labels: app.kubernetes.io/component=clickhouse-keeper-1.0.2 app.kubernetes.io/instance=clkhouse-oxscub-backup app.kubernetes.io/managed-by=kubeblocks apps.kubeblocks.io/component-name=ch-keeper apps.kubeblocks.io/pod-name=clkhouse-oxscub-backup-ch-keeper-2 apps.kubeblocks.io/release-phase=stable apps.kubeblocks.io/service-version=22.8.21 controller-revision-hash=7f8d889464 kubeblocks.io/role=follower workloads.kubeblocks.io/instance=clkhouse-oxscub-backup-ch-keeper workloads.kubeblocks.io/managed-by=InstanceSet
Annotations: apps.kubeblocks.io/last-role-snapshot-version: 1770865811720123 kubeblocks.io/restart: 2026-02-12T02:39:19Z
Status: Running
SeccompProfile: RuntimeDefault
IP: 10.244.3.229
IPs: IP: 10.244.3.229
Controlled By: InstanceSet/clkhouse-oxscub-backup-ch-keeper
Init Containers:
  copy-tools:
    Container ID: containerd://0b07dfb914f7038ad4eeb79e3592db2c8c066f630577f93df62a50b2d0d6fe54
    Image: docker.io/apecloud/bash-busybox:1.37.0-musl-curl
    Image ID: docker.io/apecloud/bash-busybox@sha256:0af829c4a29058ccc2c41f6fbf3b50b83b290dd02d7f164fff60bb7429da8e5a
    Port: Host Port:
    Command: sh -c
cp /bin/nc /shared-tools/ chmod +x /shared-tools/nc State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 11:09:53 +0800 Finished: Thu, 12 Feb 2026 11:09:53 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjdm2 (ro) init-kbagent: Container ID: containerd://b3e92ff8d4b0207fcf7a9ff4a7df3a1002d71167ed834cc29029ecbf88ba4b9b Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: cp -r /bin/kbagent /kubeblocks/ State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 11:09:54 +0800 Finished: Thu, 12 Feb 2026 11:09:54 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /kubeblocks from kubeblocks (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjdm2 (ro) kbagent-worker: Container ID: containerd://f54f468ce9cc9e2516fcff943a855d23587df8fbb772b8334ab5f38dbc8bdd04 Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb Port: Host Port: Command: /kubeblocks/kbagent Args: --server=false State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 11:09:55 +0800 Finished: Thu, 12 Feb 2026 11:09:55 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace) KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-2 (v1:metadata.name) KB_AGENT_POD_UID: (v1:metadata.uid) KB_AGENT_NODE_NAME: (v1:spec.nodeName) KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjdm2 (ro) Containers: clickhouse: Container ID: containerd://ca1eba8752e7e7bffb528694d0996079e79e40593ebae9dc85dbff9432b5f992 Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb 
Ports: 8123/TCP, 8443/TCP, 9000/TCP, 9009/TCP, 9010/TCP, 9440/TCP, 8001/TCP, 9181/TCP, 9234/TCP, 9281/TCP, 9444/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP Command: bash -xc /scripts/bootstrap-keeper.sh State: Running Started: Thu, 12 Feb 2026 11:09:56 +0800 Ready: True Restart Count: 0 Limits: cpu: 300m memory: 2254857830400m Requests: cpu: 300m memory: 2254857830400m Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjdm2 (ro) kbagent: Container ID: containerd://9b0f4c2e8fab669243d957da1fcf72b15c3f1f650e79ec1a117bea8efe045596 Image: docker.io/apecloud/clickhouse:22.8.21-debian-11-r33 Image ID: docker.io/apecloud/clickhouse@sha256:6081d2b1807cc747b5a3e6da3fdccaaefedfe550a871c91351f3aab595ae7cfb Ports: 3501/TCP, 3502/TCP Host Ports: 0/TCP, 0/TCP Command: /kubeblocks/kbagent Args: --port 3501 --streaming-port 3502 State: Running Started: Thu, 12 Feb 2026 11:09:56 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Startup: tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3 Environment Variables from: clkhouse-oxscub-backup-ch-keeper-env ConfigMap Optional: false Environment: CLICKHOUSE_ADMIN_PASSWORD: Optional: false KB_AGENT_NAMESPACE: ns-nhoig (v1:metadata.namespace) KB_AGENT_POD_NAME: clkhouse-oxscub-backup-ch-keeper-2 (v1:metadata.name) KB_AGENT_POD_UID: (v1:metadata.uid) KB_AGENT_NODE_NAME: (v1:spec.nodeName) KB_AGENT_ACTION: [{"name":"switchover","exec":{"command":["bash","-c","/scripts/keeper-switchover.sh \u003e\u003e /bitnami/clickhouse/keeper-switchover.log 2\u003e\u00261\n"]}},{"name":"memberJoin","exec":{"command":["bash","-c","/scripts/keeper-member-join.sh \u003e /tmp/keeper-member-join.log 2\u003e\u00261\n"]}},{"name":"memberLeave","exec":{"command":["bash","-c","/scripts/keeper-member-leave.sh \u003e /tmp/keeper-member-leave.log 2\u003e\u00261\n"]}},{"name":"roleProbe","exec":{"command":["bash","-c","/scripts/keeper-role-probe.sh\n"]},"timeoutSeconds":3}] KB_AGENT_PROBE: [{"instance":"clkhouse-oxscub-backup-ch-keeper","action":"roleProbe","initialDelaySeconds":15,"periodSeconds":3}] Mounts: /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /kubeblocks from kubeblocks (rw) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /scripts from scripts (rw) /shared-tools from shared-tools (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjdm2 (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready True ContainersReady True PodScheduled True Volumes: shared-tools: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-clkhouse-oxscub-backup-ch-keeper-2 ReadOnly: false kubeblocks: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-keeper-tpl Optional: false client-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: 
clkhouse-oxscub-backup-ch-keeper-clickhouse-client-tpl Optional: false
  scripts:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: clkhouse-oxscub-backup-ch-keeper-clickhouse-scripts
    Optional: false
  kube-api-access-vjdm2:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 9m49s default-scheduler Successfully assigned ns-nhoig/clkhouse-oxscub-backup-ch-keeper-2 to aks-cicdamdpool-17242166-vmss000001
  Normal Pulled 9m49s kubelet Container image "docker.io/apecloud/bash-busybox:1.37.0-musl-curl" already present on machine
  Normal Created 9m49s kubelet Created container: copy-tools
  Normal Started 9m49s kubelet Started container copy-tools
  Normal Pulled 9m48s kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on machine
  Normal Created 9m48s kubelet Created container: init-kbagent
  Normal Started 9m48s kubelet Started container init-kbagent
  Normal Pulled 9m47s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m47s kubelet Created container: kbagent-worker
  Normal Started 9m47s kubelet Started container kbagent-worker
  Normal Pulled 9m46s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m46s kubelet Created container: clickhouse
  Normal Started 9m46s kubelet Started container clickhouse
  Normal Pulled 9m46s kubelet Container image "docker.io/apecloud/clickhouse:22.8.21-debian-11-r33" already present on machine
  Normal Created 9m46s kubelet Created container: kbagent
  Normal Started 9m46s kubelet Started container kbagent
  Normal roleProbe 9m30s kbagent {"instance":"clkhouse-oxscub-backup-ch-keeper","probe":"roleProbe","code":0,"output":"Zm9sbG93ZXI="}
------------------------------------------------------------------------------------------------------------------
--------------------------------------pod clkhouse-oxscub-backup-ch-keeper-0 clkhouse-oxscub-backup-ch-keeper-1 clkhouse-oxscub-backup-ch-keeper-2--------------------------------------
 `kubectl logs clkhouse-oxscub-backup-ch-keeper-0 --namespace ns-nhoig --tail 500` 
------------------------------------------------------------------------------------------------------------------
 `kubectl logs clkhouse-oxscub-backup-ch-keeper-1 --namespace ns-nhoig --tail 500` 
+ /scripts/bootstrap-keeper.sh
grep: /opt/bitnami/clickhouse/etc/conf.d/ch-keeper_00_default_overrides.xml: No such file or directory
clickhouse 03:09:58.24 INFO  ==>
clickhouse 03:09:58.29 INFO  ==> Welcome to the Bitnami clickhouse container
clickhouse 03:09:58.30 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
clickhouse 03:09:58.30 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
clickhouse 03:09:58.30 INFO  ==>
clickhouse 03:09:58.30 INFO  ==> ** Starting ClickHouse setup **
clickhouse 03:09:58.41 INFO  ==> ** ClickHouse setup finished! **
clickhouse 03:09:58.50 INFO  ==> ** Starting ClickHouse **
Processing configuration file '/opt/bitnami/clickhouse/etc/config.xml'.
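Note that `kubectl logs` for clkhouse-oxscub-backup-ch-keeper-0 returned nothing because that pod never started its containers: the describe output above shows it Pending, every container Waiting on PodInitializing, and a FailedAttachVolume warning from the Azure disk CSI driver. A minimal triage sketch for this state (not part of the captured run; plain kubectl, except the controller label, which is assumed from typical AKS deployments and should be verified on the target cluster):

# Which PVC the pod is waiting on, and its events
kubectl describe pvc data-clkhouse-oxscub-backup-ch-keeper-0 --namespace ns-nhoig

# VolumeAttachment objects are cluster-scoped; a stuck attach shows up here
kubectl get volumeattachment | grep pvc-7d893ff5-fd9c-4db0-9d96-82b3d63a5643

# CSI attacher logs (label assumed; adjust for the cluster at hand)
kubectl logs -n kube-system -l app=csi-azuredisk-controller -c csi-attacher --tail 100

The roleProbe event payloads in the describes above are base64-encoded role names:

echo bGVhZGVy | base64 -d      # -> leader
echo Zm9sbG93ZXI= | base64 -d  # -> follower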
Merging configuration file '/opt/bitnami/clickhouse/etc/conf.d/ch_keeper_00_default_overrides.xml'. Logging information to /bitnami/clickhouse/log/keeper-server.log Logging errors to /bitnami/clickhouse/log/keeper-server.err.log 2026.02.12 03:09:58.694220 [ 1 ] {} Application: Will watch for the process with pid 45 2026.02.12 03:09:58.694337 [ 45 ] {} Application: Forked a child process to watch 2026.02.12 03:09:58.695135 [ 45 ] {} SentryWriter: Sending crash reports is disabled 2026.02.12 03:09:59.013745 [ 45 ] {} : Starting ClickHouse 22.8.21.38 with revision 54465, build id: CD723F248A3FE1E1, PID 45 2026.02.12 03:09:59.013931 [ 45 ] {} Application: starting up 2026.02.12 03:09:59.013974 [ 45 ] {} Application: OS name: Linux, version: 5.15.0-1102-azure, architecture: x86_64 2026.02.12 03:09:59.094974 [ 45 ] {} Context: Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled 2026.02.12 03:09:59.707761 [ 45 ] {} Application: Integrity check of the executable successfully passed (checksum: FF3E77B91C662ED781DCD0F488905DEF) 2026.02.12 03:09:59.715257 [ 45 ] {} StatusFile: Status file /bitnami/clickhouse/data/status already exists - unclean restart. Contents: PID: 42 Started at: 2026-02-12 03:08:06 Revision: 54465 2026.02.12 03:09:59.715975 [ 45 ] {} SensitiveDataMaskerConfigRead: 1 query masking rules loaded. 2026.02.12 03:09:59.795583 [ 45 ] {} Application: Setting max_server_memory_usage was set to 56.51 GiB (62.79 GiB available * 0.90 max_server_memory_usage_to_ram_ratio) 2026.02.12 03:09:59.799289 [ 45 ] {} CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again. 2026.02.12 03:09:59.799621 [ 45 ] {} Context: Cannot connect to ZooKeeper (or Keeper) before internal Keeper start, will wait for Keeper synchronously 2026.02.12 03:09:59.807124 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found. 2026.02.12 03:09:59.807607 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below): 0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse 1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse 2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse 3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse 4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse 5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse 6. DB::KeeperStateManager::KeeperStateManager(int, std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, Poco::Util::AbstractConfiguration const&, std::__1::shared_ptr const&) @ 0x161495a6 in /opt/bitnami/clickhouse/bin/clickhouse 7. 
DB::KeeperServer::KeeperServer(std::__1::shared_ptr const&, Poco::Util::AbstractConfiguration const&, ConcurrentBoundedQueue&, ConcurrentBoundedQueue&) @ 0x161075bb in /opt/bitnami/clickhouse/bin/clickhouse 8. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f8d9c in /opt/bitnami/clickhouse/bin/clickhouse 9. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse 10. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse 11. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse 12. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse 13. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse 14. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse 15. __libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so 16. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build)) 2026.02.12 03:09:59.809380 [ 45 ] {} KeeperLogStore: force_sync enabled 2026.02.12 03:09:59.809625 [ 45 ] {} KeeperServer: Preprocessing 1 log entries 2026.02.12 03:09:59.809659 [ 45 ] {} KeeperServer: Preprocessing done 2026.02.12 03:09:59.809675 [ 45 ] {} KeeperServer: No config in snapshot, will use config from log store with log index 1 2026.02.12 03:09:59.812439 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found. 2026.02.12 03:09:59.812837 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below): 0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse 1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse 2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse 3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse 4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse 5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse 6. DB::KeeperServer::startup(Poco::Util::AbstractConfiguration const&, bool) @ 0x1610c8b1 in /opt/bitnami/clickhouse/bin/clickhouse 7. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f8fde in /opt/bitnami/clickhouse/bin/clickhouse 8. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse 9. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse 10. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse 11. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse 12. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse 13. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse 14. 
__libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so
15. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build))
2026.02.12 03:09:59.815055 [ 45 ] {} KeeperStateManager: Read state from /bitnami/clickhouse/coordination/state
2026.02.12 03:10:01.298663 [ 57 ] {} RaftInstance: Election timeout, initiate leader election
2026.02.12 03:10:01.310735 [ 63 ] {} RaftInstance: failed to send vote request: peer 1 (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local:9234) is busy
2026.02.12 03:10:01.310922 [ 66 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:01.341402 [ 72 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:01.350161 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:01.350454 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse
3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::KeeperStateManager::getConfigurationDiff(Poco::Util::AbstractConfiguration const&) const @ 0x1614c12d in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::KeeperServer::getConfigurationDiff(Poco::Util::AbstractConfiguration const&) @ 0x1610ee5d in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::KeeperDispatcher::updateConfiguration(Poco::Util::AbstractConfiguration const&) @ 0x160fa348 in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f99a3 in /opt/bitnami/clickhouse/bin/clickhouse
10. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse
11. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse
12. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse
13. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse
14. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse
15. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse
16. __libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so
17. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build))
2026.02.12 03:10:01.350741 [ 62 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:01.350923 [ 45 ] {} Application: Listening for Keeper (tcp): [::]:9181
2026.02.12 03:10:01.355096 [ 45 ] {} Application: Uncompressed cache policy name
2026.02.12 03:10:01.356170 [ 45 ] {} Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32
2026.02.12 03:10:01.357909 [ 45 ] {} Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2026.02.12 03:10:01.360776 [ 45 ] {} Context: Initialized background executor for fetches with num_threads=8, num_tasks=8
2026.02.12 03:10:01.363134 [ 45 ] {} Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2026.02.12 03:10:01.363552 [ 45 ] {} Application: Loading user defined objects from /bitnami/clickhouse/data/
2026.02.12 03:10:01.363662 [ 45 ] {} Application: Loading metadata from /bitnami/clickhouse/data/
2026.02.12 03:10:01.399206 [ 45 ] {} DatabaseAtomic (system): Metadata processed, database system has 3 tables and 0 dictionaries in total.
2026.02.12 03:10:01.399268 [ 45 ] {} TablesLoader: Parsed metadata of 3 tables in 1 databases in 0.00345438 sec
2026.02.12 03:10:01.399311 [ 45 ] {} TablesLoader: Loading 3 tables with 0 dependency level
2026.02.12 03:10:01.513158 [ 45 ] {} DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2026.02.12 03:10:01.513366 [ 45 ] {} DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2026.02.12 03:10:01.513386 [ 45 ] {} TablesLoader: Parsed metadata of 0 tables in 1 databases in 5.5602e-05 sec
2026.02.12 03:10:01.513406 [ 45 ] {} TablesLoader: Loading 0 tables with 0 dependency level
2026.02.12 03:10:01.513420 [ 45 ] {} DatabaseAtomic (default): Starting up tables.
2026.02.12 03:10:01.513433 [ 45 ] {} DatabaseAtomic (system): Starting up tables.
2026.02.12 03:10:01.514573 [ 135 ] {} BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 128 threads
2026.02.12 03:10:01.515650 [ 144 ] {} bool DB::(anonymous namespace)::checkPermissionsImpl(): Code: 412. DB::Exception: Can't receive Netlink response: error -2. (NETLINK_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa432ba3 in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa432d98 in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::TaskStatsInfoGetter::TaskStatsInfoGetter() @ 0xa4325b7 in /opt/bitnami/clickhouse/bin/clickhouse
4. ? @ 0xa432386 in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::TaskStatsInfoGetter::checkPermissions() @ 0xa432329 in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::TasksStatsCounters::create(unsigned long) @ 0xa42a65d in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::ThreadStatus::initPerformanceCounters() @ 0x150cfda5 in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::ThreadStatus::setupState(std::__1::shared_ptr const&) @ 0x150cf6f9 in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::CurrentThread::initializeQuery() @ 0x150d2360 in /opt/bitnami/clickhouse/bin/clickhouse
10. DB::BackgroundSchedulePool::attachToThreadGroup() @ 0x13f38019 in /opt/bitnami/clickhouse/bin/clickhouse
11. DB::BackgroundSchedulePool::threadFunction() @ 0x13f38176 in /opt/bitnami/clickhouse/bin/clickhouse
12. ? @ 0x13f390cc in /opt/bitnami/clickhouse/bin/clickhouse
13. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
14. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
15. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
16. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
2026.02.12 03:10:01.849643 [ 45 ] {} Application: Tasks stats provider: procfs
2026.02.12 03:10:01.854164 [ 67 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:02.297632 [ 45 ] {} CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2026.02.12 03:10:02.303748 [ 45 ] {} DNSCacheUpdater: Update period 15 seconds
2026.02.12 03:10:02.303852 [ 45 ] {} Application: Available RAM: 62.79 GiB; physical cores: 8; logical cores: 16.
2026.02.12 03:10:02.304763 [ 45 ] {} Application: Listening for http://[::]:8123
2026.02.12 03:10:02.304876 [ 45 ] {} Application: Listening for native protocol (tcp): [::]:9000
2026.02.12 03:10:02.304982 [ 45 ] {} Application: Listening for MySQL compatibility protocol: [::]:9004
2026.02.12 03:10:02.305063 [ 45 ] {} Application: Listening for PostgreSQL compatibility protocol: [::]:9005
2026.02.12 03:10:02.305141 [ 45 ] {} Application: Listening for Prometheus: http://[::]:8001
2026.02.12 03:10:02.305232 [ 45 ] {} Application: Listening for replica communication (interserver): http://[::]:9009
2026.02.12 03:10:02.305258 [ 45 ] {} Application: Ready for connections.
2026.02.12 03:10:02.308326 [ 159 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:02.308835 [ 159 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:02.434950 [ 71 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:03.084124 [ 62 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:03.798097 [ 60 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:04.534536 [ 72 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:05.285996 [ 57 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:06.035549 [ 70 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:06.785026 [ 59 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:07.536041 [ 70 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:08.286568 [ 59 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:09.037533 [ 67 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:09.787355 [ 71 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:10.533568 [ 64 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:11.287517 [ 61 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:12.036832 [ 57 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:12.785433 [ 66 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:13.537258 [ 57 ] {} RaftInstance: too verbose RPC error on peer (1), will suppress it from now
2026.02.12 03:10:17.312681 [ 200 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:17.313442 [ 200 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:17.313491 [ 200 ] {} DNSCacheUpdater: IPs of some hosts have been changed. Will reload cluster config.
2026.02.12 03:10:32.319229 [ 231 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:32.320633 [ 231 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:47.324969 [ 141 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:47.326383 [ 141 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.330483 [ 230 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:11:02.332258 [ 230 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.332294 [ 230 ] {} DNSResolver: Cached hosts dropped: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.332306 [ 230 ] {} DNSCacheUpdater: IPs of some hosts have been changed. Will reload cluster config.
2026.02.12 03:13:05.000986 [ 133 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file: /sys/block/sdf/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xa4042a0 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xa46390e in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x146e2fd1 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x146d7d6f in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::AsynchronousMetrics::run() @ 0x146e24be in /opt/bitnami/clickhouse/bin/clickhouse
6. ? @ 0x146e6bac in /opt/bitnami/clickhouse/bin/clickhouse
7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
8. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
------------------------------------------------------------------------------------------------------------------
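Every peer lookup in the log above fails with an authoritative Host not found for clkhouse-oxscub-backup-ch-keeper-0, so the keeper quorum of the restored cluster never forms. A minimal DNS probe for this situation, assuming busybox:1.36 is pullable and using a throwaway pod named dns-probe (both are illustrative, not part of the test script):

# Does the headless Service for the restored keeper component exist, and does it have endpoints?
`kubectl get svc,endpoints --namespace ns-nhoig | grep ch-keeper-headless`
# Resolve the per-pod record from inside the cluster network; dns-probe is a hypothetical helper pod
`kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 --namespace ns-nhoig -- nslookup clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local`

An authoritative answer of "not found" from cluster DNS usually means the per-pod record simply does not exist yet (the pod is absent or not published by the headless Service), rather than that DNS itself is broken.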
`kubectl logs clkhouse-oxscub-backup-ch-keeper-2 --namespace ns-nhoig --tail 500`
+ /scripts/bootstrap-keeper.sh
grep: /opt/bitnami/clickhouse/etc/conf.d/ch-keeper_00_default_overrides.xml: No such file or directory
clickhouse 03:09:56.59 INFO  ==>
clickhouse 03:09:56.60 INFO  ==> Welcome to the Bitnami clickhouse container
clickhouse 03:09:56.60 INFO  ==> Subscribe to project updates by watching https://github.com/bitnami/containers
clickhouse 03:09:56.60 INFO  ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
clickhouse 03:09:56.60 INFO  ==>
clickhouse 03:09:56.60 INFO  ==> ** Starting ClickHouse setup **
clickhouse 03:09:56.72 INFO  ==> ** ClickHouse setup finished! **
clickhouse 03:09:56.80 INFO  ==> ** Starting ClickHouse **
Processing configuration file '/opt/bitnami/clickhouse/etc/config.xml'.
Merging configuration file '/opt/bitnami/clickhouse/etc/conf.d/ch_keeper_00_default_overrides.xml'.
Logging information to /bitnami/clickhouse/log/keeper-server.log
Logging errors to /bitnami/clickhouse/log/keeper-server.err.log
2026.02.12 03:09:57.000368 [ 1 ] {} Application: Will watch for the process with pid 45
2026.02.12 03:09:57.000518 [ 45 ] {} Application: Forked a child process to watch
2026.02.12 03:09:57.001358 [ 45 ] {} SentryWriter: Sending crash reports is disabled
2026.02.12 03:09:57.313643 [ 45 ] {} : Starting ClickHouse 22.8.21.38 with revision 54465, build id: CD723F248A3FE1E1, PID 45
2026.02.12 03:09:57.313819 [ 45 ] {} Application: starting up
2026.02.12 03:09:57.313849 [ 45 ] {} Application: OS name: Linux, version: 5.15.0-1102-azure, architecture: x86_64
2026.02.12 03:09:57.326584 [ 45 ] {} Context: Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
2026.02.12 03:09:58.000675 [ 45 ] {} Application: Integrity check of the executable successfully passed (checksum: FF3E77B91C662ED781DCD0F488905DEF)
2026.02.12 03:09:58.013386 [ 45 ] {} SensitiveDataMaskerConfigRead: 1 query masking rules loaded.
2026.02.12 03:09:58.020123 [ 45 ] {} Application: Setting max_server_memory_usage was set to 56.51 GiB (62.79 GiB available * 0.90 max_server_memory_usage_to_ram_ratio)
2026.02.12 03:09:58.021587 [ 45 ] {} CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2026.02.12 03:09:58.021803 [ 45 ] {} Context: Cannot connect to ZooKeeper (or Keeper) before internal Keeper start, will wait for Keeper synchronously
2026.02.12 03:09:58.103081 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:09:58.103564 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse
3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::KeeperStateManager::KeeperStateManager(int, std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, Poco::Util::AbstractConfiguration const&, std::__1::shared_ptr const&) @ 0x161495a6 in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::KeeperServer::KeeperServer(std::__1::shared_ptr const&, Poco::Util::AbstractConfiguration const&, ConcurrentBoundedQueue&, ConcurrentBoundedQueue&) @ 0x161075bb in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f8d9c in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse
10. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse
11. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse
12. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse
13. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse
14. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse
15. __libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so
16. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build))
2026.02.12 03:09:58.105086 [ 45 ] {} KeeperLogStore: force_sync enabled
2026.02.12 03:09:58.105473 [ 45 ] {} KeeperServer: Preprocessing 0 log entries
2026.02.12 03:09:58.105502 [ 45 ] {} KeeperServer: Preprocessing done
2026.02.12 03:09:58.105534 [ 45 ] {} KeeperServer: Will use config from log store with log index 1
2026.02.12 03:09:58.117075 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:09:58.117356 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse
3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::KeeperServer::startup(Poco::Util::AbstractConfiguration const&, bool) @ 0x1610c8b1 in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f8fde in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse
10. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse
11. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse
12. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse
13. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse
14. __libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so
15. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build))
2026.02.12 03:09:58.119194 [ 45 ] {} KeeperStateManager: Read state from /bitnami/clickhouse/coordination/state
2026.02.12 03:09:59.185801 [ 57 ] {} RaftInstance: Election timeout, initiate leader election
2026.02.12 03:09:59.194831 [ 60 ] {} RaftInstance: peer (2) response error: failed to connect to peer 2, clkhouse-oxscub-backup-ch-keeper-1.clkhouse-oxscub-backup-ch-keeper-he
2026.02.12 03:09:59.199024 [ 63 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:01.068166 [ 62 ] {} RaftInstance: Election timeout, initiate leader election
2026.02.12 03:10:01.068358 [ 62 ] {} RaftInstance: total 1 nodes (including this node) responded for pre-vote (term 1, live 0, dead 1), at least 2 nodes should respond. failure count 1
2026.02.12 03:10:01.072690 [ 66 ] {} RaftInstance: peer (1) response error: failed to resolve host clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local due to error 1, Host not found (authoritative)
2026.02.12 03:10:01.075824 [ 64 ] {} RaftInstance: peer (2) response error: failed to connect to peer 2, clkhouse-oxscub-backup-ch-keeper-1.clkhouse-oxscub-backup-ch-keeper-he
2026.02.12 03:10:01.351530 [ 45 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:01.351835 [ 45 ] {} bool DB::(anonymous namespace)::isLocalhost(const std::string &): Code: 198. DB::Exception: Not found address of host: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local. (DNS_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa50a24a in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa508324 in /opt/bitnami/clickhouse/bin/clickhouse
3. ? @ 0xa508776 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::DNSResolver::resolveHost(std::__1::basic_string, std::__1::allocator > const&) @ 0xa507fda in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::KeeperStateManager::parseServersConfiguration(Poco::Util::AbstractConfiguration const&, bool) const @ 0x161474ab in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::KeeperStateManager::getConfigurationDiff(Poco::Util::AbstractConfiguration const&) const @ 0x1614c12d in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::KeeperServer::getConfigurationDiff(Poco::Util::AbstractConfiguration const&) @ 0x1610ee5d in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::KeeperDispatcher::updateConfiguration(Poco::Util::AbstractConfiguration const&) @ 0x160fa348 in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::KeeperDispatcher::initialize(Poco::Util::AbstractConfiguration const&, bool, bool) @ 0x160f99a3 in /opt/bitnami/clickhouse/bin/clickhouse
10. DB::Context::initializeKeeperDispatcher(bool) const @ 0x1491118d in /opt/bitnami/clickhouse/bin/clickhouse
11. DB::Server::main(std::__1::vector, std::__1::allocator >, std::__1::allocator, std::__1::allocator > > > const&) @ 0xa4a3ee0 in /opt/bitnami/clickhouse/bin/clickhouse
12. Poco::Util::Application::run() @ 0x18a51946 in /opt/bitnami/clickhouse/bin/clickhouse
13. DB::Server::run() @ 0xa494e9e in /opt/bitnami/clickhouse/bin/clickhouse
14. mainEntryClickHouseServer(int, char**) @ 0xa492437 in /opt/bitnami/clickhouse/bin/clickhouse
15. main @ 0xa3f138b in /opt/bitnami/clickhouse/bin/clickhouse
16. __libc_start_main @ 0x23d0a in /lib/x86_64-linux-gnu/libc-2.31.so
17. _start @ 0xa1b0a2e in /opt/bitnami/clickhouse/bin/clickhouse (version 22.8.21.38 (official build))
2026.02.12 03:10:01.352819 [ 45 ] {} Application: Listening for Keeper (tcp): [::]:9181
2026.02.12 03:10:01.356498 [ 45 ] {} Application: Uncompressed cache policy name
2026.02.12 03:10:01.357545 [ 45 ] {} Context: Initialized background executor for merges and mutations with num_threads=16, num_tasks=32
2026.02.12 03:10:01.358192 [ 45 ] {} Context: Initialized background executor for move operations with num_threads=8, num_tasks=8
2026.02.12 03:10:01.364493 [ 45 ] {} Context: Initialized background executor for fetches with num_threads=8, num_tasks=8
2026.02.12 03:10:01.365152 [ 45 ] {} Context: Initialized background executor for common operations (e.g. clearing old parts) with num_threads=8, num_tasks=8
2026.02.12 03:10:01.365499 [ 45 ] {} Application: Loading user defined objects from /bitnami/clickhouse/data/
2026.02.12 03:10:01.365581 [ 45 ] {} Application: Loading metadata from /bitnami/clickhouse/data/
2026.02.12 03:10:01.396858 [ 45 ] {} DatabaseAtomic (system): Metadata processed, database system has 3 tables and 0 dictionaries in total.
2026.02.12 03:10:01.396908 [ 45 ] {} TablesLoader: Parsed metadata of 3 tables in 1 databases in 0.030882209 sec
2026.02.12 03:10:01.396941 [ 45 ] {} TablesLoader: Loading 3 tables with 0 dependency level
2026.02.12 03:10:01.512507 [ 45 ] {} DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2026.02.12 03:10:01.512704 [ 45 ] {} DatabaseAtomic (default): Metadata processed, database default has 0 tables and 0 dictionaries in total.
2026.02.12 03:10:01.512722 [ 45 ] {} TablesLoader: Parsed metadata of 0 tables in 1 databases in 4.9101e-05 sec
2026.02.12 03:10:01.512741 [ 45 ] {} TablesLoader: Loading 0 tables with 0 dependency level
2026.02.12 03:10:01.512755 [ 45 ] {} DatabaseAtomic (default): Starting up tables.
2026.02.12 03:10:01.512767 [ 45 ] {} DatabaseAtomic (system): Starting up tables.
2026.02.12 03:10:01.513931 [ 139 ] {} BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 128 threads
2026.02.12 03:10:01.518199 [ 203 ] {} bool DB::(anonymous namespace)::checkPermissionsImpl(): Code: 412. DB::Exception: Can't receive Netlink response: error -2. (NETLINK_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. ? @ 0xa432ba3 in /opt/bitnami/clickhouse/bin/clickhouse
2. ? @ 0xa432d98 in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::TaskStatsInfoGetter::TaskStatsInfoGetter() @ 0xa4325b7 in /opt/bitnami/clickhouse/bin/clickhouse
4. ? @ 0xa432386 in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::TaskStatsInfoGetter::checkPermissions() @ 0xa432329 in /opt/bitnami/clickhouse/bin/clickhouse
6. DB::TasksStatsCounters::create(unsigned long) @ 0xa42a65d in /opt/bitnami/clickhouse/bin/clickhouse
7. DB::ThreadStatus::initPerformanceCounters() @ 0x150cfda5 in /opt/bitnami/clickhouse/bin/clickhouse
8. DB::ThreadStatus::setupState(std::__1::shared_ptr const&) @ 0x150cf6f9 in /opt/bitnami/clickhouse/bin/clickhouse
9. DB::CurrentThread::initializeQuery() @ 0x150d2360 in /opt/bitnami/clickhouse/bin/clickhouse
10. DB::BackgroundSchedulePool::attachToThreadGroup() @ 0x13f38019 in /opt/bitnami/clickhouse/bin/clickhouse
11. DB::BackgroundSchedulePool::threadFunction() @ 0x13f38176 in /opt/bitnami/clickhouse/bin/clickhouse
12. ? @ 0x13f390cc in /opt/bitnami/clickhouse/bin/clickhouse
13. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
14. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
15. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
16. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
2026.02.12 03:10:01.803833 [ 45 ] {} Application: Tasks stats provider: procfs
2026.02.12 03:10:02.299872 [ 45 ] {} CertificateReloader: One of paths is empty. Cannot apply new configuration for certificates. Fill all paths and try again.
2026.02.12 03:10:02.304659 [ 45 ] {} DNSCacheUpdater: Update period 15 seconds
2026.02.12 03:10:02.304753 [ 45 ] {} Application: Available RAM: 62.79 GiB; physical cores: 8; logical cores: 16.
2026.02.12 03:10:02.305917 [ 45 ] {} Application: Listening for http://[::]:8123
2026.02.12 03:10:02.306025 [ 45 ] {} Application: Listening for native protocol (tcp): [::]:9000
2026.02.12 03:10:02.306091 [ 45 ] {} Application: Listening for MySQL compatibility protocol: [::]:9004
2026.02.12 03:10:02.306162 [ 45 ] {} Application: Listening for PostgreSQL compatibility protocol: [::]:9005
2026.02.12 03:10:02.306247 [ 45 ] {} Application: Listening for Prometheus: http://[::]:8001
2026.02.12 03:10:02.306340 [ 45 ] {} Application: Listening for replica communication (interserver): http://[::]:9009
2026.02.12 03:10:02.306368 [ 45 ] {} Application: Ready for connections.
2026.02.12 03:10:02.308691 [ 141 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:02.309290 [ 141 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:02.309325 [ 141 ] {} DNSCacheUpdater: IPs of some hosts have been changed. Will reload cluster config.
2026.02.12 03:10:17.316392 [ 176 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:17.318156 [ 176 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:32.321725 [ 258 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:32.323360 [ 258 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:10:47.344244 [ 169 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:10:47.345042 [ 169 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.349943 [ 234 ] {} DNSResolver: Cannot resolve host (clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local), error 0: Host not found.
2026.02.12 03:11:02.351182 [ 234 ] {} DNSResolver: Cached hosts not found: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.351217 [ 234 ] {} DNSResolver: Cached hosts dropped: clkhouse-oxscub-backup-ch-keeper-0.clkhouse-oxscub-backup-ch-keeper-headless.ns-nhoig.svc.cluster.local
2026.02.12 03:11:02.351230 [ 234 ] {} DNSCacheUpdater: IPs of some hosts have been changed. Will reload cluster config.
2026.02.12 03:13:32.000973 [ 133 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file: /sys/block/sdz/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xa4042a0 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xa46390e in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x146e2fd1 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x146d7d6f in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::AsynchronousMetrics::run() @ 0x146e24be in /opt/bitnami/clickhouse/bin/clickhouse
6. ? @ 0x146e6bac in /opt/bitnami/clickhouse/bin/clickhouse
7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
8. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
2026.02.12 03:18:27.001229 [ 133 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file: /sys/block/sde/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xa4042a0 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xa46390e in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x146e2fd1 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x146d7d6f in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::AsynchronousMetrics::run() @ 0x146e24be in /opt/bitnami/clickhouse/bin/clickhouse
6. ? @ 0x146e6bac in /opt/bitnami/clickhouse/bin/clickhouse
7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
8. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
------------------------------------------------------------------------------------------------------------------
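The second keeper replica reports the same unresolved peer 0 and the same failed pre-votes, so the problem is cluster-wide rather than pod-local. While the post-ready check below is still polling, the restored pods can be inspected directly; a sketch reusing the instance label the script applies elsewhere, with the pod name taken from the pod_info dump further down:

# Every pod of the restored cluster, with phase, restarts, and node placement
`kubectl get pods -l app.kubernetes.io/instance=clkhouse-oxscub-backup --namespace ns-nhoig -o wide`
# Events and init-container progress for one pod stuck in Init:0/3
`kubectl describe pod clkhouse-oxscub-backup-clickhouse-k2w-0 --namespace ns-nhoig`
# List init-container names before pulling their logs; the names vary by addon version
`kubectl get pod clkhouse-oxscub-backup-clickhouse-k2w-0 --namespace ns-nhoig -o jsonpath='{.spec.initContainers[*].name}'`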
check backup restore post ready
post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists: post_ready_pod_exists:
check backup restore post ready exists timeout
check backup restore post ready done
`kbcli cluster describe-backup --names backup-ns-nhoig-clkhouse-oxscub-20260212110403 --namespace ns-nhoig`
Name:      backup-ns-nhoig-clkhouse-oxscub-20260212110403
Cluster:   clkhouse-oxscub
Namespace: ns-nhoig
Spec:
  Method:      full
  Policy Name: clkhouse-oxscub-clickhouse-backup-policy
Actions:
  dp-backup-clickhouse-ql2-0:
    ActionType:      Job
    WorkloadName:    dp-backup-clickhouse-ql2-0-backup-ns-nhoig-clkhouse-oxscub-2026
    TargetPodName:   clkhouse-oxscub-clickhouse-ql2-0
    Phase:           Completed
    Start Time:      Feb 12,2026 11:04 UTC+0800
    Completion Time: Feb 12,2026 11:04 UTC+0800
  dp-backup-clickhouse-t2k-0:
    ActionType:      Job
    WorkloadName:    dp-backup-clickhouse-t2k-0-backup-ns-nhoig-clkhouse-oxscub-2026
    TargetPodName:   clkhouse-oxscub-clickhouse-t2k-0
    Phase:           Completed
    Start Time:      Feb 12,2026 11:04 UTC+0800
    Completion Time: Feb 12,2026 11:04 UTC+0800
Status:
  Phase:           Completed
  Total Size:      20403
  ActionSet Name:  clickhouse-full-backup
  Repository:      backuprepo-kbcli-test
  Duration:        32s
  Start Time:      Feb 12,2026 11:04 UTC+0800
  Completion Time: Feb 12,2026 11:04 UTC+0800
  Path:            /ns-nhoig/clkhouse-oxscub-8c350645-8d4e-4da4-bd4b-7ad4cefd9d09/clickhouse/backup-ns-nhoig-clkhouse-oxscub-20260212110403
Warning Events:
delete cluster clkhouse-oxscub-backup
`kbcli cluster delete clkhouse-oxscub-backup --auto-approve --namespace ns-nhoig`
pod_info:
clkhouse-oxscub-backup-ch-keeper-0        2/2   Running    0   11m
clkhouse-oxscub-backup-ch-keeper-1        2/2   Running    0   11m
clkhouse-oxscub-backup-ch-keeper-2        2/2   Running    0   11m
clkhouse-oxscub-backup-clickhouse-k2w-0   0/2   Init:0/3   0   44s
clkhouse-oxscub-backup-clickhouse-k2w-1   0/2   Init:0/3   0   44s
clkhouse-oxscub-backup-clickhouse-x5l-0   0/2   Init:0/3   0   44s
clkhouse-oxscub-backup-clickhouse-x5l-1   0/2   Init:0/3   0   44s
Cluster clkhouse-oxscub-backup deleted
delete cluster pod done
check cluster resource non-exist OK: pvc
delete cluster done
cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-nhoig-clkhouse-oxscub-20260212110403 --namespace ns-nhoig`
backup.dataprotection.kubeblocks.io/backup-ns-nhoig-clkhouse-oxscub-20260212110403 patched
`kbcli cluster delete-backup clkhouse-oxscub --name backup-ns-nhoig-clkhouse-oxscub-20260212110403 --force --auto-approve --namespace ns-nhoig`
Backup backup-ns-nhoig-clkhouse-oxscub-20260212110403 deleted
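Clearing metadata.finalizers lets the API server drop the Backup object even if its controller-side cleanup hangs, and --force with --auto-approve makes the kbcli deletion non-interactive; the trade-off is that data under the repository path shown above may be left behind. A quick sweep to confirm no dataprotection objects survive, assuming the standard KubeBlocks dataprotection CRDs are installed:

# Any Backup objects left in the test namespace?
`kubectl get backups.dataprotection.kubeblocks.io --namespace ns-nhoig`
# Any Restore objects created for the clkhouse-oxscub-backup cluster?
`kubectl get restores.dataprotection.kubeblocks.io --namespace ns-nhoig`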
get cluster clkhouse-oxscub shard clickhouse component name
`kubectl get component -l "app.kubernetes.io/instance=clkhouse-oxscub,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-nhoig`
set shard component name:clickhouse-ql2
cluster list-logs
`kbcli cluster list-logs clkhouse-oxscub --component clickhouse-ql2 --namespace ns-nhoig`
cluster logs
`kbcli cluster logs clkhouse-oxscub --tail 30 --namespace ns-nhoig`
2026.02.12 03:04:27.906472 [ 48 ] {} KeeperTCPHandler: Requesting session ID for the new client
2026.02.12 03:04:27.918231 [ 48 ] {} KeeperTCPHandler: Received session ID 46
2026.02.12 03:13:32.001180 [ 256 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file: /sys/block/sdz/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xa4042a0 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xa46390e in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x146e2fd1 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x146d7d6f in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::AsynchronousMetrics::run() @ 0x146e24be in /opt/bitnami/clickhouse/bin/clickhouse
6. ? @ 0x146e6bac in /opt/bitnami/clickhouse/bin/clickhouse
7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
8. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
2026.02.12 03:18:27.001202 [ 256 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file: /sys/block/sde/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xa40325a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xa4042a0 in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xa46390e in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x146e2fd1 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x146d7d6f in /opt/bitnami/clickhouse/bin/clickhouse
5. DB::AsynchronousMetrics::run() @ 0x146e24be in /opt/bitnami/clickhouse/bin/clickhouse
6. ? @ 0x146e6bac in /opt/bitnami/clickhouse/bin/clickhouse
7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xa4c4828 in /opt/bitnami/clickhouse/bin/clickhouse
8. ? @ 0xa4c7a3d in /opt/bitnami/clickhouse/bin/clickhouse
9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0xfba2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.8.21.38 (official build))
delete cluster clkhouse-oxscub
`kbcli cluster delete clkhouse-oxscub --auto-approve --namespace ns-nhoig`
pod_info:
clkhouse-oxscub-ch-keeper-0        2/2   Running   0             40m
clkhouse-oxscub-ch-keeper-1        2/2   Running   1 (35m ago)   40m
clkhouse-oxscub-ch-keeper-2        2/2   Running   0             40m
clkhouse-oxscub-clickhouse-ql2-0   2/2   Running   4 (33m ago)   40m
clkhouse-oxscub-clickhouse-ql2-1   2/2   Running   6 (35m ago)   40m
clkhouse-oxscub-clickhouse-t2k-0   2/2   Running   6 (35m ago)   40m
clkhouse-oxscub-clickhouse-t2k-1   2/2   Running   0             40m
Cluster clkhouse-oxscub deleted
pod_info:
clkhouse-oxscub-ch-keeper-0   2/2   Terminating   0   40m
delete cluster pod done
check cluster resource non-exist OK: pvc
delete cluster done
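With both clusters gone and the PVC check passing, one last sweep of the namespace confirms nothing leaked before the suite reports; a sketch:

# Anything still running or claimed in the test namespace?
`kubectl get pods,pvc --namespace ns-nhoig`
# The Cluster objects themselves should be gone
`kubectl get clusters.apps.kubeblocks.io --namespace ns-nhoig`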
Clickhouse Test Suite All Done!
Test Engine: clickhouse
Test Type: 29
--------------------------------------Clickhouse 22.8.21 (Topology = cluster Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=cluster;ComponentDefinition=clickhouse-1.0.2;ComponentVersion=clickhouse;ServiceVersion=22.8.21;]|[Description=Create a cluster with the specified topology cluster with the specified component definition clickhouse-1.0.2 and component version clickhouse and service version 22.8.21]
[PASSED]|[Connect]|[ComponentName=clickhouse-t2k]|[Description=Connect to the cluster]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[NoFailover]|[HA=Delete Pod;ComponentName=clickhouse-t2k]|[Description=Simulates conditions where pods terminating forced/graceful thereby testing deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.]
[PASSED]|[VerticalScaling]|[ComponentName=ch-keeper]|[Description=VerticalScaling the cluster specify component ch-keeper]
[PASSED]|[Scale Out Shard Post]|[ShardsName=clickhouse]|[Description=-]
[PASSED]|[HorizontalScaling Out]|[ShardsName=clickhouse]|[Description=HorizontalScaling Out the cluster]
[PASSED]|[HorizontalScaling In]|[ShardsName=clickhouse]|[Description=HorizontalScaling In the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=clickhouse]|[Description=VolumeExpansion the cluster specify component clickhouse]
[PASSED]|[NoFailover]|[HA=Kill 1;ComponentName=clickhouse-t2k]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.]
[PASSED]|[Restart]|[ComponentName=clickhouse]|[Description=Restart the cluster specify component clickhouse]
[PASSED]|[VerticalScaling]|[ComponentName=clickhouse]|[Description=VerticalScaling the cluster specify component clickhouse]
[PASSED]|[Restart]|[ComponentName=ch-keeper]|[Description=Restart the cluster specify component ch-keeper]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[NoFailover]|[HA=Connection Stress;ComponentName=clickhouse-t2k]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[HorizontalScaling Out]|[ComponentName=clickhouse]|[Description=HorizontalScaling Out the cluster specify component clickhouse]
[PASSED]|[HorizontalScaling In]|[ComponentName=clickhouse]|[Description=HorizontalScaling In the cluster specify component clickhouse]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=ch-keeper]|[Description=VolumeExpansion the cluster specify component ch-keeper]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Backup]|[BackupMethod=full]|[Description=The cluster full Backup]
[FAILED]|[Restore]|[BackupMethod=full]|[Description=The cluster full Restore]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=full]|[Description=Delete the full restore cluster]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]