https://github.com/apecloud/kubeblocks/actions/runs/21898070903
previous_version: kubeblocks_version:1.0.2
bash test/kbcli/test_kbcli_1.0.sh --type 29 --version 1.0.2 --generate-output true --chaos-mesh true --aws-access-key-id *** --aws-secret-access-key *** --jihulab-token *** --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-uwpgk`
`kubectl create namespace ns-uwpgk`
namespace/ns-uwpgk created
create namespace ns-uwpgk done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.2`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Make sure your docker service is running and begin your journey with kbcli:
    kbcli playground init
For more information on how to get started, please visit:
    https://kubeblocks.io
download kbcli v1.0.2 done
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Kubernetes Env: v1.32.10
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is:1.0.2 skip upgrade KubeBlocks
current KubeBlocks version: 1.0.2
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name:clickhouse
set component version
set component version:clickhouse
set service versions:25.9.7,25.4.4,24.8.3,22.8.21,22.3.20,22.3.18
set service versions sorted:22.3.18,22.3.20,22.8.21,24.8.3,25.4.4,25.9.7
set clickhouse component definition
set clickhouse component definition clickhouse-1.0.2
REPORT_COUNT 0:0
set replicas first:2,22.3.18|2,22.3.20|2,22.8.21|2,24.8.3|2,25.4.4|2,25.9.7
set replicas third:2,22.8.21
set replicas fourth:2,22.3.18
set minimum cmpv service version
set minimum cmpv service version replicas:2,22.3.18
set replicas end:2,22.3.18
REPORT_COUNT:1
CLUSTER_TOPOLOGY:cluster
cluster definition topology: standalone cluster
topology cluster found in cluster definition clickhouse
set clickhouse component definition
set clickhouse component definition clickhouse-keeper-1.0.2
LIMIT_CPU:0.2
LIMIT_MEMORY:2
storage size: 20
CLUSTER_NAME:clkhouse-icopne
pod_info:
termination_policy:DoNotTerminate
create 2 replica DoNotTerminate clickhouse cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: clickhouse-1.0.2 by component version:clickhouse
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: clkhouse-icopne
  namespace: ns-uwpgk
spec:
  clusterDef: clickhouse
  topology: cluster
  terminationPolicy: DoNotTerminate
  componentSpecs:
    - name: ch-keeper
      serviceVersion: 22.3.18
      replicas: 3
      disableExporter: false
      services:
        - name: default
          serviceType: ClusterIP
      systemAccounts:
        - name: admin
          passwordConfig:
            length: 10
            numDigits: 5
            numSymbols: 0
            letterCase: MixedCases
            seed: clkhouse-icopne
      resources:
        requests:
          cpu: 200m
          memory: 2Gi
        limits:
          cpu: 200m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
  shardings:
    - name: clickhouse
      shards: 2
      template:
        name: clickhouse
        serviceVersion: 22.3.18
        env:
          - name: "INIT_CLUSTER_NAME"
            value: "default"
        replicas: 2
        disableExporter: false
        services:
          - name: default
            serviceType: ClusterIP
        systemAccounts:
          - name: admin
            passwordConfig:
              length: 10
              numDigits: 5
              numSymbols: 0
              letterCase: MixedCases
              seed: clkhouse-icopne
        resources:
          requests:
            cpu: 200m
            memory: 2Gi
          limits:
            cpu: 200m
            memory: 2Gi
        volumeClaimTemplates:
          - name: data
            spec:
              storageClassName:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
`kubectl apply -f test_create_clkhouse-icopne.yaml`
cluster.apps.kubeblocks.io/clkhouse-icopne created
apply test_create_clkhouse-icopne.yaml Success
`rm -rf test_create_clkhouse-icopne.yaml`
check cluster status
`kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Creating Feb 11,2026 17:44 UTC+0800 clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:44 UTC+0800
clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:44 UTC+0800
clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:44 UTC+0800
clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800
clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800
clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800
clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-fwl-account-admin
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
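The credential values echoed on the next line come from the KubeBlocks-generated system-account Secret selected above; the jsonpath reads return base64-encoded data, so a decode step is what produces the plain DB_USERNAME/DB_PASSWORD/DB_PORT values. A minimal sketch of that step (bash), reusing the secret name from the log:

# decode the admin account credentials from the generated Secret
SECRET=clkhouse-icopne-clickhouse-fwl-account-admin
NS=ns-uwpgk
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD};DB_PORT:${DB_PORT}"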
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-6x4-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check pod clkhouse-icopne-clickhouse-6x4-0 container_name clickhouse exist password VH838l0WO3(B check pod clkhouse-icopne-clickhouse-6x4-0 container_name kbagent exist password VH838l0WO3(B No container logs contain secret password.(B describe cluster  `kbcli cluster describe clkhouse-icopne --namespace ns-uwpgk `(B  Name: clkhouse-icopne Created Time: Feb 11,2026 17:44 UTC+0800 NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY ns-uwpgk clickhouse cluster Running DoNotTerminate Endpoints: COMPONENT INTERNAL EXTERNAL ch-keeper clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:8123 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:8443 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9000 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9009 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9010 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:8001 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9181 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9234 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9281 clkhouse-icopne-ch-keeper.ns-uwpgk.svc.cluster.local:9440 clickhouse(clickhouse-6x4) clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:8001 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:8123 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:8443 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9000 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9004 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9005 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9009 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9010 clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local:9440 clickhouse(clickhouse-fwl) clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:8001 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:8123 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:8443 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9000 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9004 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9005 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9009 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9010 clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local:9440 Topology: COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME ch-keeper 22.3.18 clkhouse-icopne-ch-keeper-0 follower Running 0 aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:44 UTC+0800 ch-keeper 22.3.18 clkhouse-icopne-ch-keeper-1 leader Running 0 
aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:44 UTC+0800 ch-keeper 22.3.18 clkhouse-icopne-ch-keeper-2 follower Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:44 UTC+0800 clickhouse(clickhouse-6x4) 22.3.18 clkhouse-icopne-clickhouse-6x4-0 Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800 clickhouse(clickhouse-6x4) 22.3.18 clkhouse-icopne-clickhouse-6x4-1 Running 0 aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800 clickhouse(clickhouse-fwl) 22.3.18 clkhouse-icopne-clickhouse-fwl-0 Running 0 aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800 clickhouse(clickhouse-fwl) 22.3.18 clkhouse-icopne-clickhouse-fwl-1 Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800 Resources Allocation: COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS ch-keeper 200m / 200m 2Gi / 2Gi data:20Gi default clickhouse 200m / 200m 2Gi / 2Gi data:20Gi default Images: COMPONENT COMPONENT-DEFINITION IMAGE ch-keeper clickhouse-keeper-1.0.2 docker.io/apecloud/clickhouse:22.3.18-debian-11-r3 clickhouse clickhouse-1.0.2 docker.io/apecloud/clickhouse:22.3.18-debian-11-r3 Data Protection: BACKUP-REPO AUTO-BACKUP BACKUP-SCHEDULE BACKUP-METHOD BACKUP-RETENTION RECOVERABLE-TIME Show cluster events: kbcli cluster list-events -n ns-uwpgk clkhouse-icopne get cluster clkhouse-icopne shard clickhouse component name  `kubectl get component -l "app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-uwpgk`(B  set shard component name:clickhouse-fwl  `kbcli cluster label clkhouse-icopne app.kubernetes.io/instance- --namespace ns-uwpgk `(B  label "app.kubernetes.io/instance" not found.  
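The shard component lookup a few lines above is what resolves the generated per-shard name (clickhouse-fwl) before the label checks start. A sketch of the same lookup (bash), using the label selector from the log; the custom-columns output format is only an assumption for readability, not what the test script itself prints:

# list the Component objects behind the "clickhouse" sharding and print only their names
kubectl get component -n ns-uwpgk \
  -l "app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/sharding-name=clickhouse" \
  -o custom-columns=NAME:.metadata.name --no-headers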
`kbcli cluster label clkhouse-icopne app.kubernetes.io/instance=clkhouse-icopne --namespace ns-uwpgk `(B   `kbcli cluster label clkhouse-icopne --list --namespace ns-uwpgk `(B  NAME NAMESPACE LABELS clkhouse-icopne ns-uwpgk app.kubernetes.io/instance=clkhouse-icopne clusterdefinition.kubeblocks.io/name=clickhouse label cluster app.kubernetes.io/instance=clkhouse-icopne Success(B  `kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=clkhouse-icopne --namespace ns-uwpgk `(B   `kbcli cluster label clkhouse-icopne --list --namespace ns-uwpgk `(B  NAME NAMESPACE LABELS clkhouse-icopne ns-uwpgk app.kubernetes.io/instance=clkhouse-icopne case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=clickhouse label cluster case.name=kbcli.test1 Success(B  `kbcli cluster label clkhouse-icopne case.name=kbcli.test2 --overwrite --namespace ns-uwpgk `(B   `kbcli cluster label clkhouse-icopne --list --namespace ns-uwpgk `(B  NAME NAMESPACE LABELS clkhouse-icopne ns-uwpgk app.kubernetes.io/instance=clkhouse-icopne case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=clickhouse label cluster case.name=kbcli.test2 Success(B  `kbcli cluster label clkhouse-icopne case.name- --namespace ns-uwpgk `(B   `kbcli cluster label clkhouse-icopne --list --namespace ns-uwpgk `(B  NAME NAMESPACE LABELS clkhouse-icopne ns-uwpgk app.kubernetes.io/instance=clkhouse-icopne clusterdefinition.kubeblocks.io/name=clickhouse delete cluster label case.name Success(B cluster connect  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT * FROM system.clusters"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash `(B  default 1 1 1 clkhouse-icopne-clickhouse-6x4-0.clkhouse-icopne-clickhouse-6x4-headless.ns-uwpgk.svc.cluster.local 10.244.3.55 9000 0 admin 0 0 0 default 1 1 2 clkhouse-icopne-clickhouse-6x4-1.clkhouse-icopne-clickhouse-6x4-headless.ns-uwpgk.svc.cluster.local 10.244.2.139 9000 0 admin 0 0 0 default 2 1 1 clkhouse-icopne-clickhouse-fwl-0.clkhouse-icopne-clickhouse-fwl-headless.ns-uwpgk.svc.cluster.local 10.244.2.162 9000 0 admin 0 0 0 default 2 1 2 clkhouse-icopne-clickhouse-fwl-1.clkhouse-icopne-clickhouse-fwl-headless.ns-uwpgk.svc.cluster.local 10.244.3.95 9000 1 admin 0 0 0 connect cluster Success(B insert batch data by db client  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-icopne --namespace ns-uwpgk `(B   `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  
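The `kubectl patch ... "finalizers":[]` call a few lines above is the cleanup idiom used throughout this run: strip finalizers from a leftover test resource so a forced delete cannot hang, then delete it; the same pattern is later applied to OpsRequests. A minimal sketch of the pattern (bash), with the resource kind and name as placeholders:

# clear finalizers, then force-delete a leftover test resource
RESOURCE=pods   # e.g. pods, or opsrequests.operations.kubeblocks.io
NAME=test-db-client-executionloop-clkhouse-icopne
kubectl patch "$RESOURCE" "$NAME" -n ns-uwpgk --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl delete "$RESOURCE" "$NAME" -n ns-uwpgk --force --grace-period=0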
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B apiVersion: v1 kind: Pod metadata: name: test-db-client-executionloop-clkhouse-icopne namespace: ns-uwpgk spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - "clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local" - "--user" - "admin" - "--password" - "VH838l0WO3" - "--port" - "8123" - "--dbtype" - "clickhouse" - "--test" - "executionloop" - "--duration" - "20" - "--interval" - "1" - "--cluster" - "default" restartPolicy: Never  `kubectl apply -f test-db-client-executionloop-clkhouse-icopne.yaml`(B  pod/test-db-client-executionloop-clkhouse-icopne created apply test-db-client-executionloop-clkhouse-icopne.yaml Success(B  `rm -rf test-db-client-executionloop-clkhouse-icopne.yaml`(B  check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-icopne 1/1 Running 0 5s(B pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-icopne 1/1 Running 0 9s(B pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-icopne 1/1 Running 0 14s(B pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-icopne 1/1 Running 0 20s(B check pod test-db-client-executionloop-clkhouse-icopne status done(B pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-clkhouse-icopne 0/1 Completed 0 25s(B check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:48 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800 check pod status done(B  `kubectl 
get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --user admin --password VH838l0WO3 --port 8123 --dbtype clickhouse --test executionloop --duration 20 --interval 1 --cluster default SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] 09:50:50.239 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** 09:50:50.241 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. * 09:50:50.241 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0. * 09:50:50.241 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** Execution loop start: create database executions_loop CREATE DATABASE IF NOT EXISTS executions_loop ON CLUSTER default; drop distributed table executions_loop_table_distributed DROP TABLE IF EXISTS executions_loop.executions_loop_table_distributed ON CLUSTER default SYNC; drop table executions_loop_table DROP TABLE IF EXISTS executions_loop.executions_loop_table ON CLUSTER default SYNC; create table executions_loop_table CREATE TABLE IF NOT EXISTS executions_loop.executions_loop_table ON CLUSTER default (id UInt32, value String) ENGINE = ReplicatedMergeTree() ORDER BY id; create distributed table executions_loop_table_distributed CREATE TABLE IF NOT EXISTS executions_loop.executions_loop_table_distributed ON CLUSTER default AS executions_loop.executions_loop_table ENGINE = Distributed('default', 'executions_loop', 'executions_loop_table', rand()); Execution loop start:INSERT INTO executions_loop.executions_loop_table_distributed (id, value) VALUES (1, 'executions_loop_test_1'); [ 1s ] executions total: 1 successful: 1 failed: 0 disconnect: 0 [ 2s ] executions total: 27 successful: 27 failed: 0 disconnect: 0 [ 3s ] executions total: 45 successful: 45 failed: 0 disconnect: 0 [ 4s ] executions total: 60 successful: 60 failed: 0 disconnect: 0 [ 5s ] executions total: 82 successful: 82 failed: 0 disconnect: 0 [ 6s ] executions total: 99 successful: 99 failed: 0 disconnect: 0 [ 7s ] executions total: 111 successful: 111 failed: 0 disconnect: 0 [ 8s ] executions total: 126 successful: 126 failed: 0 disconnect: 0 [ 9s ] executions total: 137 successful: 137 failed: 0 disconnect: 0 [ 10s ] executions total: 152 successful: 152 failed: 0 disconnect: 0 [ 11s ] executions total: 168 successful: 168 failed: 0 disconnect: 0 [ 12s ] executions total: 180 successful: 180 failed: 0 disconnect: 0 
[ 13s ] executions total: 196 successful: 196 failed: 0 disconnect: 0
[ 14s ] executions total: 213 successful: 213 failed: 0 disconnect: 0
[ 15s ] executions total: 228 successful: 228 failed: 0 disconnect: 0
[ 16s ] executions total: 246 successful: 246 failed: 0 disconnect: 0
[ 17s ] executions total: 266 successful: 266 failed: 0 disconnect: 0
[ 18s ] executions total: 282 successful: 282 failed: 0 disconnect: 0
[ 20s ] executions total: 283 successful: 283 failed: 0 disconnect: 0
Test Result:
Total Executions: 283
Successful Executions: 283
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: clickhouse
Host: clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local
Port: 8123
Database:
Table:
User: admin
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 20 seconds
Interval: 1 seconds
Cluster: default
DB_CLIENT_BATCH_DATA_COUNT: 283
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-clkhouse-icopne --namespace ns-uwpgk`
pod/test-db-client-executionloop-clkhouse-icopne patched (no change)
pod "test-db-client-executionloop-clkhouse-icopne" force deleted
`kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-fwl-account-admin
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
set db_client batch data count
`echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`
set db_client batch data Success
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale clkhouse-icopne --auto-approve --force=true --components clickhouse --cpu 300m --memory 2.1Gi --namespace ns-uwpgk`
OpsRequest clkhouse-icopne-verticalscaling-bxnw9 created successfully, you can view the progress:
    kbcli cluster describe-ops clkhouse-icopne-verticalscaling-bxnw9 -n ns-uwpgk
check ops status
`kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
clkhouse-icopne-verticalscaling-bxnw9 ns-uwpgk VerticalScaling clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 17:51 UTC+0800
check cluster status
`kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:44 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:52 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:51 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-verticalscaling-bxnw9 ns-uwpgk 
VerticalScaling clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 17:51 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-verticalscaling-bxnw9 ns-uwpgk VerticalScaling clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 17:51 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-verticalscaling-bxnw9 --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-verticalscaling-bxnw9 patched  `kbcli cluster delete-ops --name clkhouse-icopne-verticalscaling-bxnw9 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-verticalscaling-bxnw9 deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B cluster stop check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster stop clkhouse-icopne --auto-approve --force=true --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-stop-tl5k6 created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-stop-tl5k6 -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-stop-tl5k6 ns-uwpgk Stop clkhouse-icopne ch-keeper,clickhouse Running 0/7 Feb 11,2026 17:55 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Stopping Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Stopping(B cluster_status:Stopping(B check cluster status done(B cluster_status:Stopped(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-stop-tl5k6 ns-uwpgk Stop clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 17:55 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-stop-tl5k6 ns-uwpgk Stop clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 17:55 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-stop-tl5k6 --namespace ns-uwpgk `(B  
opsrequest.operations.kubeblocks.io/clkhouse-icopne-stop-tl5k6 patched  `kbcli cluster delete-ops --name clkhouse-icopne-stop-tl5k6 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-stop-tl5k6 deleted cluster start check cluster status before ops check cluster status done(B cluster_status:Stopped(B  `kbcli cluster start clkhouse-icopne --force=true --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-start-p95ss created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-start-p95ss -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-start-p95ss ns-uwpgk Start clkhouse-icopne ch-keeper,clickhouse Running 0/7 Feb 11,2026 17:56 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: 
clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-start-p95ss ns-uwpgk Start clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 17:56 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-start-p95ss ns-uwpgk Start clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 17:56 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-start-p95ss --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-start-p95ss patched  `kbcli cluster delete-ops --name clkhouse-icopne-start-p95ss --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-start-p95ss deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B cmpv upgrade service version:2,22.3.18|2,22.3.20|2,22.8.21|2,24.8.3|2,25.4.4|2,25.9.7 cmpv service version upgrade upgrade from:22.3.18 to service version:22.3.20 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: ch-keeper serviceVersion: 22.3.20 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-t26b6 created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-t26b6 ns-uwpgk Upgrade clkhouse-icopne ch-keeper Running 0/3 Feb 
11,2026 18:04 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk 
`(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-t26b6 ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:04 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-t26b6 ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:04 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-t26b6 --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-t26b6 patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-t26b6 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-t26b6 deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B upgrade from:22.3.20 to service version:22.8.21 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: ch-keeper serviceVersion: 22.8.21 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-cqv2x created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-cqv2x ns-uwpgk Upgrade clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:06 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status 
done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-cqv2x ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:06 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-cqv2x ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:06 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-cqv2x --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-cqv2x patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-cqv2x --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-cqv2x deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get 
secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B cmpv downgrade service version:22.3.20|22.3.18 cmpv service version downgrade downgrade from:22.8.21 to service version:22.3.20 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: ch-keeper serviceVersion: 22.3.20 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-fpfln created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-fpfln ns-uwpgk Upgrade clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:08 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi 
aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-fpfln ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:08 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-fpfln ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:08 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-fpfln --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-fpfln patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-fpfln --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-fpfln deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B downgrade from:22.3.20 to service version:22.3.18 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: 
clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: ch-keeper serviceVersion: 22.3.18 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-w5plk created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-w5plk ns-uwpgk Upgrade clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:10 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o 
jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
check cluster connect
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
clkhouse-icopne-upgrade-cmpv-w5plk  ns-uwpgk  Upgrade  clkhouse-icopne  ch-keeper  Succeed  3/3  Feb 11,2026 18:10 UTC+0800
check ops status done
ops_status:clkhouse-icopne-upgrade-cmpv-w5plk ns-uwpgk Upgrade clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:10 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-w5plk --namespace ns-uwpgk `
opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-w5plk patched
 `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-w5plk --force --auto-approve --namespace ns-uwpgk `
OpsRequest clkhouse-icopne-upgrade-cmpv-w5plk deleted
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-fwl-account-admin
 `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`
check db_client batch [283] equal [283] data Success
test failover fullcpu
check cluster status before cluster-failover-fullcpu
check cluster status done
cluster_status:Running
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-clkhouse-icopne --namespace ns-uwpgk `
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-fullcpu-clkhouse-icopne
  namespace: ns-uwpgk
spec:
  selector:
    namespaces:
    - ns-uwpgk
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-fwl-0
  mode: all
  stressors:
    cpu:
      workers: 100
      load: 100
  duration: 2m
 `kubectl apply -f test-chaos-mesh-fullcpu-clkhouse-icopne.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpu-clkhouse-icopne created
apply test-chaos-mesh-fullcpu-clkhouse-icopne.yaml Success
 `rm -rf test-chaos-mesh-fullcpu-clkhouse-icopne.yaml`
fullcpu chaos test waiting 120 seconds
check cluster status
 `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
clkhouse-icopne  ns-uwpgk  clickhouse  DoNotTerminate  Updating  Feb 11,2026 17:44 UTC+0800
app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-clkhouse-icopne --namespace ns-uwpgk `(B  stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-fwl-0 failover fullcpu Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B 
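Note: the plaintext DB_USERNAME/DB_PASSWORD/DB_PORT values echoed on the next line are simply the base64-decoded fields of the account secret read above. A minimal sketch of the same lookup, assuming only standard kubectl and base64 (secret name and namespace taken from this run):

  # Decode the admin account credentials the test reads before each connect/db_client check.
  SECRET=clkhouse-icopne-clickhouse-fwl-account-admin
  NS=ns-uwpgk
  kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d; echo
  kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d; echo
  kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d; echo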
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B test failover podkill(B check cluster status before cluster-failover-podkill check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podkill-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-fwl-0 mode: all action: pod-kill  `kubectl apply -f test-chaos-mesh-podkill-clkhouse-icopne.yaml`(B  podchaos.chaos-mesh.org/test-chaos-mesh-podkill-clkhouse-icopne created apply test-chaos-mesh-podkill-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-podkill-clkhouse-icopne.yaml`(B  check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets 
clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-clkhouse-icopne --namespace ns-uwpgk `(B  podchaos.chaos-mesh.org "test-chaos-mesh-podkill-clkhouse-icopne" force deleted podchaos.chaos-mesh.org/test-chaos-mesh-podkill-clkhouse-icopne patched check failover pod name failover pod name:clkhouse-icopne-clickhouse-fwl-0 failover podkill Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B  `kubectl get pvc -l app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/component-name=clickhouse,apps.kubeblocks.io/vct-name=data --namespace ns-uwpgk `(B  clkhouse-icopne clickhouse data pvc is empty cluster volume-expand check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster volume-expand clkhouse-icopne --auto-approve --force=true --components clickhouse --volume-claim-templates data --storage 21Gi --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-volumeexpansion-f7j7f created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-volumeexpansion-f7j7f -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-volumeexpansion-f7j7f ns-uwpgk VolumeExpansion clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:14 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B 
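Note: while the VolumeExpansion ops issued above reconciles (the Updating polls that follow), the resized request can also be confirmed directly on the PVCs. A sketch reusing the instance label from this run; the narrower component-name/vct-name selector queried earlier came back empty here, so the broad label plus custom columns is an assumption:

  # Compare requested vs. provisioned size of the cluster's data PVCs after the 21Gi expansion.
  kubectl get pvc -n ns-uwpgk -l app.kubernetes.io/instance=clkhouse-icopne \
    -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage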
cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-volumeexpansion-f7j7f ns-uwpgk VolumeExpansion clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:14 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-volumeexpansion-f7j7f ns-uwpgk VolumeExpansion clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:14 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-volumeexpansion-f7j7f --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-volumeexpansion-f7j7f patched  `kbcli cluster delete-ops --name clkhouse-icopne-volumeexpansion-f7j7f --force 
--auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-volumeexpansion-f7j7f deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B test failover podfailure(B check cluster status before cluster-failover-podfailure check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podfailure-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-fwl-0 mode: all action: pod-failure duration: 2m  `kubectl apply -f test-chaos-mesh-podfailure-clkhouse-icopne.yaml`(B  podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-clkhouse-icopne created apply test-chaos-mesh-podfailure-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-podfailure-clkhouse-icopne.yaml`(B  podfailure chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 200m / 200m 2Gi / 2Gi data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 
2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-clkhouse-icopne --namespace ns-uwpgk `(B  podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-fwl-0 failover podfailure Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster vscale clkhouse-icopne --auto-approve --force=true --components ch-keeper --cpu 300m --memory 2.1Gi --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-verticalscaling-g6wrj created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-verticalscaling-g6wrj -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-verticalscaling-g6wrj ns-uwpgk VerticalScaling clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:23 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS 
CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME 
clkhouse-icopne-verticalscaling-g6wrj ns-uwpgk VerticalScaling clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:23 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-verticalscaling-g6wrj ns-uwpgk VerticalScaling clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:23 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-verticalscaling-g6wrj --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-verticalscaling-g6wrj patched  `kbcli cluster delete-ops --name clkhouse-icopne-verticalscaling-g6wrj --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-verticalscaling-g6wrj deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B cmpv upgrade service version:2,22.3.18|2,22.3.20|2,22.8.21|2,24.8.3|2,25.4.4|2,25.9.7 cmpv service version upgrade upgrade from:22.3.18 to service version:22.3.20 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: clickhouse serviceVersion: 22.3.20 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-fc74s created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-fc74s ns-uwpgk Upgrade clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:25 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper 
Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-fc74s ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:25 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-fc74s ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:25 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-fc74s --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-fc74s patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-fc74s --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-fc74s deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets 
clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B upgrade from:22.3.20 to service version:22.8.21 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: clickhouse serviceVersion: 22.8.21 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-4mjkh created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-4mjkh ns-uwpgk Upgrade clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:26 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m 
data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-4mjkh ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:26 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-4mjkh ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:26 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-4mjkh --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-4mjkh patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-4mjkh --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-4mjkh deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B cmpv downgrade service version:22.3.20|22.3.18 cmpv service version downgrade downgrade from:22.8.21 to service version:22.3.20 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: clickhouse serviceVersion: 22.3.20 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  
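Note: besides the kbcli list-ops polling used throughout this run, the generated Upgrade OpsRequests can be followed straight from the API. A sketch (resource name as used by the finalizer patches above; the jsonpath for the clickhouse sharding template's serviceVersion is an assumption based on this cluster's spec layout):

  # Watch the OpsRequests in the test namespace, then read back the sharding's serviceVersion once it settles.
  kubectl get opsrequests.operations.kubeblocks.io -n ns-uwpgk -w
  kubectl get cluster clkhouse-icopne -n ns-uwpgk \
    -o jsonpath='{.spec.shardings[0].template.serviceVersion}'; echo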
opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-2445n created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-2445n ns-uwpgk Upgrade clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:27 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | 
kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-2445n ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:27 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-2445n ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:27 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-2445n --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-2445n patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-2445n --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-2445n deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B downgrade from:22.3.20 to service version:22.3.18 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-upgrade-cmpv- namespace: ns-uwpgk spec: clusterName: clkhouse-icopne upgrade: components: - componentName: clickhouse serviceVersion: 22.3.18 type: Upgrade check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-lbmsl created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-lbmsl ns-uwpgk Upgrade clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:28 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT 
STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-upgrade-cmpv-lbmsl ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:28 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-upgrade-cmpv-lbmsl ns-uwpgk Upgrade clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:28 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-upgrade-cmpv-lbmsl --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-upgrade-cmpv-lbmsl patched  `kbcli cluster delete-ops --name clkhouse-icopne-upgrade-cmpv-lbmsl --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-upgrade-cmpv-lbmsl deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-fwl-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.username}"`(B   
`kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-fwl-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-fwl.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-fwl-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B patch clkhouse-icopne shards 3  `kubectl patch cluster clkhouse-icopne --namespace ns-uwpgk --type json -p '[{"op": "replace", "path": "/spec/shardings/0/shards", "value": '3'}]'`(B  cluster.apps.kubeblocks.io/clkhouse-icopne patched get cluster clkhouse-icopne shard clickhouse component name  `kubectl get component -l "app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-uwpgk`(B  set shard component name:clickhouse-6x4 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 
2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-6x4-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-6x4-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B job pod status:(B job pod status:(B job pod status:(B check clkhouse-icopne post-provision skip(B cluster custom-ops post-scale-out-shard-for-clickhouse apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: clkhouse-icopne-custom- namespace: ns-uwpgk spec: type: Custom clusterName: clkhouse-icopne force: true custom: components: - componentName: clickhouse maxConcurrentComponents: 0 opsDefinitionName: post-scale-out-shard-for-clickhouse check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_clkhouse-icopne.yaml`(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-custom-rnnlj created create test_ops_cluster_clkhouse-icopne.yaml Success(B  `rm -rf test_ops_cluster_clkhouse-icopne.yaml`(B  check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-custom-rnnlj ns-uwpgk Custom clkhouse-icopne clickhouse Running 0/1 Feb 11,2026 18:35 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 
UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-fwl-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:14 UTC+0800 clkhouse-icopne-clickhouse-fwl-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-fwl) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-6x4-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-6x4-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-custom-rnnlj ns-uwpgk Custom clkhouse-icopne clickhouse Succeed 1/1 Feb 11,2026 18:35 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-custom-rnnlj ns-uwpgk Custom clkhouse-icopne clickhouse Succeed 1/1 Feb 11,2026 18:35 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-custom-rnnlj --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-custom-rnnlj patched  `kbcli cluster delete-ops --name clkhouse-icopne-custom-rnnlj --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-custom-rnnlj deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: 
clkhouse-icopne-clickhouse-6x4-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-6x4-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-6x4-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-6x4-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-6x4.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-6x4-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [283] equal [283] data Success(B patch clkhouse-icopne shards 2  `kubectl patch cluster clkhouse-icopne --namespace ns-uwpgk --type json -p '[{"op": "replace", "path": "/spec/shardings/0/shards", "value": '2'}]'`(B  cluster.apps.kubeblocks.io/clkhouse-icopne patched get cluster clkhouse-icopne shard clickhouse component name  `kubectl get component -l "app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-uwpgk`(B  set shard component name:clickhouse-7gx check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 
clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:29 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B job pod status:(B job pod status:(B job pod status:(B check clkhouse-icopne pre-terminate skip(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B set db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  set db_client batch data retry times: 1(B set db_client batch data retry times: 2(B set db_client batch data retry times: 3(B set db_client batch data retry times: 4(B set db_client batch data retry times: 5(B set db_client batch data retry times: 6(B set db_client batch data retry times: 7(B set db_client batch data retry times: 8(B set db_client batch data retry times: 9(B set db_client batch data retry times: 10(B set db_client batch data Failure(B set DB_CLIENT_BATCH_DATA_COUNT: 158  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o 
jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networklossover(B check cluster status before cluster-failover-networklossover check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networklossover-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: loss loss: loss: '100' correlation: '100' direction: to duration: 2m  `kubectl apply -f test-chaos-mesh-networklossover-clkhouse-icopne.yaml`(B  networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-clkhouse-icopne created apply test-chaos-mesh-networklossover-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networklossover-clkhouse-icopne.yaml`(B  networklossover chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 
clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:29 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-clkhouse-icopne --namespace ns-uwpgk `(B  networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover networklossover Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover dnsrandom(B check cluster status before cluster-failover-dnsrandom check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnsrandom-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: random duration: 2m  `kubectl apply -f test-chaos-mesh-dnsrandom-clkhouse-icopne.yaml`(B  dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-clkhouse-icopne created apply test-chaos-mesh-dnsrandom-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-dnsrandom-clkhouse-icopne.yaml`(B  dnsrandom chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 
app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:29 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-clkhouse-icopne --namespace ns-uwpgk `(B  dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover dnsrandom Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  
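The block that follows is the same consistency probe that closes every failover case: re-run the count query against the shard service and compare it with the baseline captured after the scale-in (158 rows). A condensed sketch, with host, credentials and table taken from this run; the script's retry and reporting wrapper is omitted.

# Sketch only: values copied from this run's log output.
EXPECTED=158
ACTUAL=$(kubectl exec clkhouse-icopne-clickhouse-7gx-0 -n ns-uwpgk -- \
  clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local \
  --port 9000 --user admin --password 'VH838l0WO3' \
  --query 'SELECT count(*) FROM executions_loop.executions_loop_table;')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "check db_client batch [$ACTUAL] equal [$EXPECTED] data Success"
else
  echo "check db_client batch [$ACTUAL] not equal [$EXPECTED] data Failure"
fi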
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover timeoffset(B check cluster status before cluster-failover-timeoffset check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: TimeChaos metadata: name: test-chaos-mesh-timeoffset-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all timeOffset: '-10m' clockIds: - CLOCK_REALTIME duration: 2m  `kubectl apply -f test-chaos-mesh-timeoffset-clkhouse-icopne.yaml`(B  timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-clkhouse-icopne created apply test-chaos-mesh-timeoffset-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-timeoffset-clkhouse-icopne.yaml`(B  timeoffset chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:25 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:24 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:23 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:56 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:29 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 
11,2026 18:29 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-clkhouse-icopne --namespace ns-uwpgk `(B  timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-clkhouse-icopne" force deleted timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-clkhouse-icopne patched check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover timeoffset Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart clkhouse-icopne --auto-approve --force=true --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-cks52 created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-restart-cks52 -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-cks52 ns-uwpgk Restart clkhouse-icopne ch-keeper,clickhouse Running 0/7 Feb 11,2026 18:43 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B 
cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:43 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-cks52 ns-uwpgk Restart clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 18:43 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-restart-cks52 ns-uwpgk Restart clkhouse-icopne ch-keeper,clickhouse Succeed 7/7 Feb 11,2026 18:43 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-restart-cks52 --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-restart-cks52 patched  `kbcli cluster delete-ops --name 
clkhouse-icopne-restart-cks52 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-cks52 deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart clkhouse-icopne --auto-approve --force=true --components clickhouse --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-rpl6d created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-restart-rpl6d -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-rpl6d ns-uwpgk Restart clkhouse-icopne clickhouse Running 0/4 Feb 11,2026 18:45 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 
2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-rpl6d ns-uwpgk Restart clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:45 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-restart-rpl6d ns-uwpgk Restart clkhouse-icopne clickhouse Succeed 4/4 Feb 11,2026 18:45 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-restart-rpl6d --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-restart-rpl6d patched  `kbcli cluster delete-ops --name clkhouse-icopne-restart-rpl6d --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-rpl6d deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B  `kubectl get pvc -l app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/component-name=ch-keeper,apps.kubeblocks.io/vct-name=data --namespace ns-uwpgk 
`(B  cluster volume-expand check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster volume-expand clkhouse-icopne --auto-approve --force=true --components ch-keeper --volume-claim-templates data --storage 22Gi --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-volumeexpansion-7m4s7 created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-volumeexpansion-7m4s7 -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-volumeexpansion-7m4s7 ns-uwpgk VolumeExpansion clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:47 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  
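The ch-keeper instances above already report data:22Gi after the VolumeExpansion request; a quick way to confirm the resize also reached the underlying PVCs, using the same label selector as the earlier pvc listing (the AKS default storage class is assumed to allow online expansion).

# Sketch only: compare requested vs. actual capacity for the ch-keeper data PVCs.
kubectl get pvc -n ns-uwpgk \
  -l app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/component-name=ch-keeper,apps.kubeblocks.io/vct-name=data \
  -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage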
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-volumeexpansion-7m4s7 ns-uwpgk VolumeExpansion clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:47 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-volumeexpansion-7m4s7 ns-uwpgk VolumeExpansion clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:47 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-volumeexpansion-7m4s7 --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-volumeexpansion-7m4s7 patched  `kbcli cluster delete-ops --name clkhouse-icopne-volumeexpansion-7m4s7 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-volumeexpansion-7m4s7 deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networkduplicate(B check cluster status before cluster-failover-networkduplicate check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkduplicate-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: duplicate duplicate: duplicate: '100' correlation: '100' direction: to duration: 2m  `kubectl apply -f test-chaos-mesh-networkduplicate-clkhouse-icopne.yaml`(B  networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-clkhouse-icopne created apply test-chaos-mesh-networkduplicate-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networkduplicate-clkhouse-icopne.yaml`(B  networkduplicate chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 
app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-clkhouse-icopne --namespace ns-uwpgk `(B  networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover networkduplicate Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  
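The networkduplicate case just completed follows the same lifecycle as every chaos experiment in this run: apply the generated NetworkChaos against the target pod, wait out the 2m duration, re-check cluster status and connectivity, then clear finalizers and force-delete the object. A condensed inline sketch follows; the script writes the spec to a temp file first, direction is assumed to sit at the spec level per the Chaos Mesh NetworkChaos schema, and the final force delete is inferred from the "force deleted" output.

# Sketch only: inline equivalent of test-chaos-mesh-networkduplicate-clkhouse-icopne.yaml.
cat <<'EOF' | kubectl apply -f -
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkduplicate-clkhouse-icopne
  namespace: ns-uwpgk
spec:
  selector:
    namespaces:
      - ns-uwpgk
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0
  mode: all
  action: duplicate
  duplicate:
    duplicate: "100"
    correlation: "100"
  direction: to
  duration: 2m
EOF
sleep 120    # the script waits the full chaos duration before re-checking status
kubectl patch networkchaos test-chaos-mesh-networkduplicate-clkhouse-icopne -n ns-uwpgk \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl delete networkchaos test-chaos-mesh-networkduplicate-clkhouse-icopne -n ns-uwpgk \
  --force --grace-period=0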
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover kill1(B check cluster status before cluster-failover-kill1 check cluster status done(B cluster_status:Running(B  `kill 1`(B  exec return message: check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:44 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:43 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host 
clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover kill1 Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart clkhouse-icopne --auto-approve --force=true --components ch-keeper --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-j4dvv created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-restart-j4dvv -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-j4dvv ns-uwpgk Restart clkhouse-icopne ch-keeper Running 0/3 Feb 11,2026 18:54 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper 
Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-restart-j4dvv ns-uwpgk Restart clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:54 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-restart-j4dvv ns-uwpgk Restart clkhouse-icopne ch-keeper Succeed 3/3 Feb 11,2026 18:54 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-restart-j4dvv --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-restart-j4dvv patched  `kbcli cluster delete-ops --name clkhouse-icopne-restart-j4dvv --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-restart-j4dvv deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it 
clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networkbandwidthover(B check cluster status before cluster-failover-networkbandwidthover check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkbandwidthover-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 action: bandwidth mode: all bandwidth: rate: '1bps' limit: 20971520 buffer: 10000 duration: 2m  `kubectl apply -f test-chaos-mesh-networkbandwidthover-clkhouse-icopne.yaml`(B  networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-clkhouse-icopne created apply test-chaos-mesh-networkbandwidthover-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networkbandwidthover-clkhouse-icopne.yaml`(B  networkbandwidthover chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B 
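The three `kubectl get secrets ... -o jsonpath` reads in this credential-fetch step (username above, password and port just below) return base64-encoded Secret fields; the plaintext DB_USERNAME/DB_PASSWORD/DB_PORT values echoed by the harness are their decoded forms. A minimal sketch of that decoding, assuming the same secret name and namespace shown in the log (the variable names are illustrative and not part of the test script):

# Sketch only: decode the admin account secret of the clickhouse-7gx shard.
SECRET=clkhouse-icopne-clickhouse-7gx-account-admin
NS=ns-uwpgk
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD};DB_PORT:${DB_PORT}"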
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
check cluster connect
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`
check cluster connect done
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-clkhouse-icopne --namespace ns-uwpgk `
networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-clkhouse-icopne" force deleted
check failover pod name
failover pod name:clkhouse-icopne-clickhouse-7gx-0
failover networkbandwidthover Success
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-7gx-account-admin
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`
check db_client batch [158] equal [158] data Success
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-7gx-account-admin
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "CREATE TABLE test_kbcli (id Int32,name String) ENGINE = MergeTree() ORDER BY id;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`
 `echo "clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password \"VH838l0WO3\" --query \"INSERT INTO test_kbcli VALUES (1,'ieeny');\" " | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`
 `clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT * FROM test_kbcli;"`
exec return msg:
exec return msg:
exec return msg:
exec return msg:
exec return msg:1 ieeny
check msg:[ieeny] equal msg:[1 ieeny]
test failover oom
check cluster status before cluster-failover-oom
check cluster status done
cluster_status:Running
 `kubectl patch -p
'{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: StressChaos metadata: name: test-chaos-mesh-oom-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all stressors: memory: workers: 1 size: "100GB" oomScoreAdj: -1000 duration: 2m  `kubectl apply -f test-chaos-mesh-oom-clkhouse-icopne.yaml`(B  stresschaos.chaos-mesh.org/test-chaos-mesh-oom-clkhouse-icopne created apply test-chaos-mesh-oom-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-oom-clkhouse-icopne.yaml`(B  check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 
'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-clkhouse-icopne --namespace ns-uwpgk `(B  stresschaos.chaos-mesh.org "test-chaos-mesh-oom-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover oom Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B cluster clickhouse scale-out cluster clickhouse scale-out replicas: 3 check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster scale-out clkhouse-icopne --auto-approve --force=true --components clickhouse --replicas 1 --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-horizontalscaling-zbrxc created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-horizontalscaling-zbrxc -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-horizontalscaling-zbrxc ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Running 0/2 Feb 11,2026 19:00 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Updating Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk 
clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-6x4-2 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 19:00 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-2 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 19:00 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-horizontalscaling-zbrxc ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Succeed 2/2 Feb 11,2026 19:00 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-horizontalscaling-zbrxc ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Succeed 2/2 Feb 11,2026 19:00 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-horizontalscaling-zbrxc --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-horizontalscaling-zbrxc patched  `kbcli cluster delete-ops --name clkhouse-icopne-horizontalscaling-zbrxc --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-horizontalscaling-zbrxc deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets 
clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B cluster clickhouse scale-in cluster clickhouse scale-in replicas: 2 check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster scale-in clkhouse-icopne --auto-approve --force=true --components clickhouse --replicas 1 --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-horizontalscaling-btzx5 created successfully, you can view the progress: kbcli cluster describe-ops clkhouse-icopne-horizontalscaling-btzx5 -n ns-uwpgk check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-horizontalscaling-btzx5 ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Running 0/2 Feb 11,2026 19:01 UTC+0800 check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi 
aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops clkhouse-icopne --status all --namespace ns-uwpgk `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME clkhouse-icopne-horizontalscaling-btzx5 ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Succeed 2/2 Feb 11,2026 19:01 UTC+0800 check ops status done(B ops_status:clkhouse-icopne-horizontalscaling-btzx5 ns-uwpgk HorizontalScaling clkhouse-icopne clickhouse Succeed 2/2 Feb 11,2026 19:01 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations clkhouse-icopne-horizontalscaling-btzx5 --namespace ns-uwpgk `(B  opsrequest.operations.kubeblocks.io/clkhouse-icopne-horizontalscaling-btzx5 patched  `kbcli cluster delete-ops --name clkhouse-icopne-horizontalscaling-btzx5 --force --auto-approve --namespace ns-uwpgk `(B  OpsRequest clkhouse-icopne-horizontalscaling-btzx5 deleted  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networkcorruptover(B check cluster status before cluster-failover-networkcorruptover check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkcorruptover-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: corrupt corrupt: corrupt: '100' correlation: '100' direction: to duration: 2m  `kubectl apply -f test-chaos-mesh-networkcorruptover-clkhouse-icopne.yaml`(B  
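For readability, the NetworkChaos spec just applied from test-chaos-mesh-networkcorruptover-clkhouse-icopne.yaml is reproduced below in standalone form; the field values are copied from the flattened spec above, and the heredoc form is only a sketch of what the harness-generated file contains (the harness writes and applies the file itself):

# Sketch only: equivalent manual application of the corrupt experiment shown above.
kubectl apply -f - <<'EOF'
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkcorruptover-clkhouse-icopne
  namespace: ns-uwpgk
spec:
  selector:
    namespaces:
      - ns-uwpgk
    labelSelectors:
      apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0
  mode: all
  action: corrupt
  corrupt:
    corrupt: '100'
    correlation: '100'
  direction: to
  duration: 2m
EOF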
networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-clkhouse-icopne created apply test-chaos-mesh-networkcorruptover-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networkcorruptover-clkhouse-icopne.yaml`(B  networkcorruptover chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-clkhouse-icopne --namespace ns-uwpgk `(B  networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-clkhouse-icopne" force deleted check failover 
pod name
failover pod name:clkhouse-icopne-clickhouse-7gx-0
failover networkcorruptover Success
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-7gx-account-admin
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
check db_client batch data count
 `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`
check db_client batch [158] equal [158] data Success
test failover connectionstress
check cluster status before cluster-failover-connectionstress
check cluster status done
cluster_status:Running
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-icopne --namespace ns-uwpgk `
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`
set secret: clkhouse-icopne-clickhouse-7gx-account-admin
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-clkhouse-icopne
  namespace: ns-uwpgk
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local"
        - "--user"
        - "admin"
        - "--password"
        - "VH838l0WO3"
        - "--port"
        - "8123"
        - "--database"
        - "default"
        - "--dbtype"
        - "clickhouse"
        - "--test"
        - "connectionstress"
        - "--connections"
        - "4096"
        - "--duration"
        - "20"
        - "--cluster"
        - "default"
  restartPolicy: Never
 `kubectl apply -f test-db-client-connectionstress-clkhouse-icopne.yaml`
pod/test-db-client-connectionstress-clkhouse-icopne created
apply test-db-client-connectionstress-clkhouse-icopne.yaml Success
 `rm -rf test-db-client-connectionstress-clkhouse-icopne.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-icopne 1/1 Running 0 5s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-icopne 1/1 Running 0 9s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-icopne 1/1 Running 0 14s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-icopne 1/1 Running 0 20s
check pod test-db-client-connectionstress-clkhouse-icopne status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-clkhouse-icopne 0/1 Completed 0 25s
check cluster status
 `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --user admin --password VH838l0WO3 --port 8123 --database default --dbtype clickhouse --test connectionstress --connections 4096 --duration 20 --cluster default SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] 11:04:08.291 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** 11:04:08.293 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * This driver is DEPRECATED. Please use [com.clickhouse.jdbc.ClickHouseDriver] instead. 
* 11:04:08.293 [main] WARN r.yandex.clickhouse.ClickHouseDriver - * Also everything in package [ru.yandex.clickhouse] will be removed starting from 0.4.0. * 11:04:08.293 [main] WARN r.yandex.clickhouse.ClickHouseDriver - ****************************************************************************************** Test Result: null Connection Information: Database Type: clickhouse Host: clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local Port: 8123 Database: default Table: User: admin Org: Access Mode: mysql Test Type: connectionstress Connection Count: 4096 Duration: 20 seconds  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-clkhouse-icopne --namespace ns-uwpgk `(B  pod/test-db-client-connectionstress-clkhouse-icopne patched (no change) pod "test-db-client-connectionstress-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover connectionstress Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networkpartition(B check cluster status before cluster-failover-networkpartition check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkpartition-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 action: partition mode: all target: mode: all selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 direction: to duration: 2m  `kubectl apply -f test-chaos-mesh-networkpartition-clkhouse-icopne.yaml`(B  networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-clkhouse-icopne created apply test-chaos-mesh-networkpartition-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networkpartition-clkhouse-icopne.yaml`(B  networkpartition chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT 
STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-clkhouse-icopne --namespace ns-uwpgk `(B  networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover networkpartition Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin 
--password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover dnserror(B check cluster status before cluster-failover-dnserror check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnserror-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: error duration: 2m  `kubectl apply -f test-chaos-mesh-dnserror-clkhouse-icopne.yaml`(B  dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-clkhouse-icopne created apply test-chaos-mesh-dnserror-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-dnserror-clkhouse-icopne.yaml`(B  dnserror chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets 
clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-clkhouse-icopne --namespace ns-uwpgk `(B  dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover dnserror Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client batch [158] equal [158] data Success(B test failover networkdelay(B check cluster status before cluster-failover-networkdelay check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-clkhouse-icopne --namespace ns-uwpgk `(B  apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkdelay-clkhouse-icopne namespace: ns-uwpgk spec: selector: namespaces: - ns-uwpgk labelSelectors: apps.kubeblocks.io/pod-name: clkhouse-icopne-clickhouse-7gx-0 mode: all action: delay delay: latency: 2000ms correlation: '100' jitter: 0ms direction: to duration: 2m  `kubectl apply -f test-chaos-mesh-networkdelay-clkhouse-icopne.yaml`(B  networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-clkhouse-icopne created apply test-chaos-mesh-networkdelay-clkhouse-icopne.yaml Success(B  `rm -rf test-chaos-mesh-networkdelay-clkhouse-icopne.yaml`(B  networkdelay chaos test waiting 120 seconds check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse DoNotTerminate Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 
300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-clkhouse-icopne --namespace ns-uwpgk `(B  networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-clkhouse-icopne" force deleted check failover pod name failover pod name:clkhouse-icopne-clickhouse-7gx-0 failover networkdelay Success(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check db_client batch data count  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT count(*) FROM executions_loop.executions_loop_table;"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check db_client 
batch [158] equal [158] data Success(B cluster update terminationPolicy WipeOut  `kbcli cluster update clkhouse-icopne --termination-policy=WipeOut --namespace ns-uwpgk `(B  cluster.apps.kubeblocks.io/clkhouse-icopne updated check cluster status  `kbcli cluster list clkhouse-icopne --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne ns-uwpgk clickhouse WipeOut Running Feb 11,2026 17:44 UTC+0800 app.kubernetes.io/instance=clkhouse-icopne,clusterdefinition.kubeblocks.io/name=clickhouse check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-ch-keeper-0 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:55 UTC+0800 clkhouse-icopne-ch-keeper-1 ns-uwpgk clkhouse-icopne ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-ch-keeper-2 ns-uwpgk clkhouse-icopne ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:54 UTC+0800 clkhouse-icopne-clickhouse-6x4-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-6x4-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-6x4) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 clkhouse-icopne-clickhouse-7gx-0 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:46 UTC+0800 clkhouse-icopne-clickhouse-7gx-1 ns-uwpgk clkhouse-icopne clickhouse(clickhouse-7gx) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:45 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne`(B  set secret: clkhouse-icopne-clickhouse-7gx-account-admin  `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-clickhouse-7gx-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-clickhouse-7gx.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-clickhouse-7gx-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B cluster full backup  `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`(B   `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`(B   `kubectl get secrets kb-backuprepo-hnrh8 -n kb-qxtxx -o jsonpath="{.data.accessKeyId}"`(B   `kubectl get secrets 
kb-backuprepo-hnrh8 -n kb-qxtxx -o jsonpath="{.data.secretAccessKey}"`(B  KUBEBLOCKS NAMESPACE:kb-qxtxx get kubeblocks namespace done(B  `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-qxtxx -o jsonpath="{.items[0].data.root-user}"`(B   `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-qxtxx -o jsonpath="{.items[0].data.root-password}"`(B  minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 list minio bucket kbcli-test  `echo 'mc alias set minioserver http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-6c77985d5f-55lsm --namespace kb-qxtxx -- bash`(B  list minio bucket done(B default backuprepo:backuprepo-kbcli-test exists(B  `kbcli cluster backup clkhouse-icopne --method full --namespace ns-uwpgk `(B  Backup backup-ns-uwpgk-clkhouse-icopne-20260211191142 created successfully, you can view the progress: kbcli cluster list-backups --names=backup-ns-uwpgk-clkhouse-icopne-20260211191142 -n ns-uwpgk check backup status  `kbcli cluster list-backups clkhouse-icopne --namespace ns-uwpgk `(B  NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-uwpgk-clkhouse-icopne-20260211191142 ns-uwpgk clkhouse-icopne full Running Delete Feb 11,2026 19:11 UTC+0800 backup_status:clkhouse-icopne-full-Running(B backup_status:clkhouse-icopne-full-Running(B backup_status:clkhouse-icopne-full-Running(B backup_status:clkhouse-icopne-full-Running(B backup_status:clkhouse-icopne-full-Running(B check backup status done(B backup_status:backup-ns-uwpgk-clkhouse-icopne-20260211191142 ns-uwpgk clkhouse-icopne full Completed 20541 23s Delete Feb 11,2026 19:11 UTC+0800 Feb 11,2026 19:12 UTC+0800 (B cluster restore backup  `kbcli cluster describe-backup --names backup-ns-uwpgk-clkhouse-icopne-20260211191142 --namespace ns-uwpgk `(B  Name: backup-ns-uwpgk-clkhouse-icopne-20260211191142 Cluster: clkhouse-icopne Namespace: ns-uwpgk Spec: Method: full Policy Name: clkhouse-icopne-clickhouse-backup-policy Actions: dp-backup-clickhouse-7gx-0: ActionType: Job WorkloadName: dp-backup-clickhouse-7gx-0-backup-ns-uwpgk-clkhouse-icopne-2026 TargetPodName: clkhouse-icopne-clickhouse-7gx-0 Phase: Completed Start Time: Feb 11,2026 19:11 UTC+0800 Completion Time: Feb 11,2026 19:12 UTC+0800 dp-backup-clickhouse-6x4-0: ActionType: Job WorkloadName: dp-backup-clickhouse-6x4-0-backup-ns-uwpgk-clkhouse-icopne-2026 TargetPodName: clkhouse-icopne-clickhouse-6x4-0 Phase: Completed Start Time: Feb 11,2026 19:11 UTC+0800 Completion Time: Feb 11,2026 19:12 UTC+0800 Status: Phase: Completed Total Size: 20541 ActionSet Name: clickhouse-full-backup Repository: backuprepo-kbcli-test Duration: 23s Start Time: Feb 11,2026 19:11 UTC+0800 Completion Time: Feb 11,2026 19:12 UTC+0800 Path: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142 Warning Events:  `kbcli cluster restore clkhouse-icopne-backup --backup backup-ns-uwpgk-clkhouse-icopne-20260211191142 --namespace ns-uwpgk `(B  Cluster clkhouse-icopne-backup created check cluster status  `kbcli cluster list clkhouse-icopne-backup --show-labels --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS clkhouse-icopne-backup ns-uwpgk clickhouse WipeOut Creating Feb 11,2026 19:12 UTC+0800 
clusterdefinition.kubeblocks.io/name=clickhouse cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B get cluster clkhouse-icopne-backup shard clickhouse component name  `kubectl get component -l "app.kubernetes.io/instance=clkhouse-icopne-backup,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-uwpgk`(B  set shard component name:clickhouse-5lm check pod status  `kbcli cluster list-instances clkhouse-icopne-backup --namespace ns-uwpgk `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME clkhouse-icopne-backup-ch-keeper-0 ns-uwpgk clkhouse-icopne-backup ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 19:12 UTC+0800 clkhouse-icopne-backup-ch-keeper-1 ns-uwpgk clkhouse-icopne-backup ch-keeper Running follower 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 19:12 UTC+0800 clkhouse-icopne-backup-ch-keeper-2 ns-uwpgk clkhouse-icopne-backup ch-keeper Running leader 0 300m / 300m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 19:12 UTC+0800 clkhouse-icopne-backup-clickhouse-5lm-0 ns-uwpgk clkhouse-icopne-backup clickhouse(clickhouse-5lm) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 19:13 UTC+0800 clkhouse-icopne-backup-clickhouse-5lm-1 ns-uwpgk clkhouse-icopne-backup clickhouse(clickhouse-5lm) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 19:13 UTC+0800 clkhouse-icopne-backup-clickhouse-ngn-0 ns-uwpgk clkhouse-icopne-backup clickhouse(clickhouse-ngn) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 19:13 UTC+0800 clkhouse-icopne-backup-clickhouse-ngn-1 ns-uwpgk clkhouse-icopne-backup clickhouse(clickhouse-ngn) Running 0 300m / 300m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 19:13 UTC+0800 check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne-backup`(B  set secret: clkhouse-icopne-backup-clickhouse-5lm-account-admin  `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.username}"`(B   `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.password}"`(B   `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.port}"`(B  DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123(B DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default(B 
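The shard-component and admin-credential lookups above can be reproduced by hand. A minimal sketch, reusing only the names and label selectors already printed in this log (the variable names and the prefix-stripping of the component name are illustrative assumptions), ending with the same row-count query the harness ran against the source cluster (which returned 158 there):

# Hypothetical manual re-check against the restored cluster; names come from the log above.
NS=ns-uwpgk
CLUSTER=clkhouse-icopne-backup
# Pick one shard component of the restored cluster (e.g. clickhouse-5lm).
COMP_FULL=$(kubectl get component -n "$NS" \
  -l "app.kubernetes.io/instance=${CLUSTER},apps.kubeblocks.io/sharding-name=clickhouse" \
  -o jsonpath='{.items[0].metadata.name}')
COMP=${COMP_FULL#"${CLUSTER}-"}     # assumes the <cluster>-<component> naming seen above
SECRET="${CLUSTER}-${COMP}-account-admin"
DB_USER=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASS=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
# Re-run the source cluster's count query against the restored shard service.
kubectl exec -n "$NS" "${CLUSTER}-${COMP}-0" -- clickhouse-client \
  --host "${CLUSTER}-${COMP}.${NS}.svc.cluster.local" --port 9000 \
  --user "$DB_USER" --password "$DB_PASS" \
  --query "SELECT count(*) FROM executions_loop.executions_loop_table;"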
check cluster connect  `echo 'clickhouse-client --host clkhouse-icopne-backup-clickhouse-5lm.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3"' | kubectl exec -it clkhouse-icopne-backup-clickhouse-5lm-0 --namespace ns-uwpgk -- bash`(B  check cluster connect done(B check backup restore post ready check backup restore post ready exists(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 54s restore-post-ready-0b0f3ae4-backup-ns-uwpgk-clkhouse-icop-d6h9j 0/2 PodInitializing 0 9s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 58s restore-post-ready-0b0f3ae4-backup-ns-uwpgk-clkhouse-icop-d6h9j 2/2 Running 0 13s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 64s restore-post-ready-0b0f3ae4-backup-ns-uwpgk-clkhouse-icop-d6h9j 2/2 Running 0 19s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 69s restore-post-ready-0b0f3ae4-backup-ns-uwpgk-clkhouse-icop-d6h9j 2/2 Running 0 24s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 74s restore-post-ready-0b0f3ae4-backup-ns-uwpgk-clkhouse-icop-d6h9j 2/2 Running 0 29s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 2/2 Running 0 80s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 85s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 90s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 96s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 1/2 Error 0 4s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 101s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 9s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 106s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 14s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 112s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 20s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 117s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 25s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m2s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 30s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m8s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 36s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m13s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 41s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m18s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 46s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m24s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 52s(B 
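The repeated post_ready_pod_status lines are a poll of the restore post-ready job pods, emitted roughly every five seconds (matching the age deltas). A rough sketch of that kind of wait loop, assuming a simple pod-name-prefix grep and an illustrative timeout (the harness's real selector and timeout value are not shown in this log):

# Illustrative wait loop for the restore post-ready pods; not the harness's exact code.
NS=ns-uwpgk
timeout=400      # seconds; illustrative only
elapsed=0
while [ "$elapsed" -lt "$timeout" ]; do
  status=$(kubectl get pod -n "$NS" --no-headers | grep restore-post-ready || true)
  echo "post_ready_pod_status:${status}"
  # Done once no post-ready pod remains in a non-Completed state.
  if [ -z "$status" ] || ! echo "$status" | grep -qv Completed; then
    break
  fi
  sleep 5
  elapsed=$((elapsed + 5))
done
[ "$elapsed" -ge "$timeout" ] && echo "[Error] check backup restore post ready timeout"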
post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m29s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 57s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m34s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 62s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m40s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 68s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m45s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 73s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m50s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 78s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 2m56s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 84s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m1s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 89s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m6s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 94s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m12s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 100s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m17s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 105s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m22s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 110s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m28s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 116s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m33s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m1s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m38s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m6s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m44s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m12s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m49s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m17s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 3m54s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m22s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m28s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m5s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m33s(B 
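With both post-ready pods stuck in Error well before the timeout, the quickest route to the actual failure reason is usually the pods' own logs rather than the full YAML dump that follows; for example (pod and container names as printed in this log):

# Inspect one of the errored restore post-ready pods.
NS=ns-uwpgk
POD=restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn
kubectl describe pod "$POD" -n "$NS" | tail -n 30          # recent events (scheduling, image pulls, OOM, ...)
kubectl logs "$POD" -n "$NS" -c restore --tail=100         # output of the clickhouse-backup restore script
kubectl logs "$POD" -n "$NS" -c restore-manager --tail=20  # sidecar that waits for the stop-restore-manager signal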
post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m10s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m38s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m15s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m43s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m21s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m49s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m26s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m54s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m31s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 2m59s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m37s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m5s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m42s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m10s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m47s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m15s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m53s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m21s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 4m58s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m26s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m3s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m31s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m9s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m37s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m14s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m42s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m19s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m47s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m25s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m53s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m30s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 3m58s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m35s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m3s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m41s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m9s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m46s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 
Error 0 4m14s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m51s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m19s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 5m57s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m25s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 6m2s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m30s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 6m7s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m35s(B post_ready_pod_status:restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn 0/2 Error 0 6m13s restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 0/2 Error 0 4m41s(B [Error] check backup restore post ready timeout(B --------------------------------------get pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 yaml--------------------------------------  `kubectl get pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn -o yaml --namespace ns-uwpgk `(B  apiVersion: v1 kind: Pod metadata: annotations: dataprotection.kubeblocks.io/stop-restore-manager: "true" creationTimestamp: "2026-02-11T11:13:51Z" generateName: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0- labels: app.kubernetes.io/managed-by: kubeblocks-dataprotection batch.kubernetes.io/controller-uid: b0539470-d058-40af-94c5-e989ce4bf597 batch.kubernetes.io/job-name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 controller-uid: b0539470-d058-40af-94c5-e989ce4bf597 dataprotection.kubeblocks.io/restore: clkhouse-icopne-backup-clickhouse-ngn-5362cf29-postready job-name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn namespace: ns-uwpgk ownerReferences: - apiVersion: batch/v1 blockOwnerDeletion: true controller: true kind: Job name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 uid: b0539470-d058-40af-94c5-e989ce4bf597 resourceVersion: "120447" uid: 21aba319-0d50-48e6-a81a-f7413c30662a spec: containers: - command: - bash - -c - "#!/bin/bash\n# log info file\nfunction DP_log() {\n\tmsg=$1\n\tlocal curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n\techo \"${curr_date} INFO: $msg\"\n}\n\n# log error info\nfunction DP_error_log() {\n\tmsg=$1\n\tlocal curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n\techo \"${curr_date} ERROR: $msg\"\n}\n\n# Get file names without extensions based on the incoming file path\nfunction DP_get_file_name_without_ext() {\n\tlocal fileName=$1\n\tlocal file_without_ext=${fileName%.*}\n\techo $(basename ${file_without_ext})\n}\n\n# Save backup status info file for syncing progress.\n# timeFormat: %Y-%m-%dT%H:%M:%SZ\nfunction DP_save_backup_status_info() {\n\tlocal totalSize=$1\n\tlocal startTime=$2\n\tlocal stopTime=$3\n\tlocal timeZone=$4\n\tlocal extras=$5\n\tlocal timeZoneStr=\"\"\n\tif [ ! 
-z ${timeZone} ]; then\n\t\ttimeZoneStr=\",\\\"timeZone\\\":\\\"${timeZone}\\\"\"\n\tfi\n\tif [ -z \"${stopTime}\" ]; then\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\"}\" >${DP_BACKUP_INFO_FILE}\n\telif [ -z \"${startTime}\" ]; then\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" >${DP_BACKUP_INFO_FILE}\n\telse\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"start\\\":\\\"${startTime}\\\",\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" >${DP_BACKUP_INFO_FILE}\n\tfi\n}\n\n# Clean up expired logfiles.\n# Default interval is 60s\n# Default rootPath is /\nfunction DP_purge_expired_files() {\n\tlocal currentUnix=\"${1:?missing current unix}\"\n\tlocal last_purge_time=\"${2:?missing last_purge_time}\"\n\tlocal root_path=${3:-\"/\"}\n\tlocal interval_seconds=${4:-60}\n\tlocal diff_time=$((${currentUnix} - ${last_purge_time}))\n\tif [[ -z ${DP_TTL_SECONDS} || ${diff_time} -lt ${interval_seconds} ]]; then\n\t\treturn\n\tfi\n\texpiredUnix=$((${currentUnix} - ${DP_TTL_SECONDS}))\n\tfiles=$(datasafed list -f --recursive --older-than ${expiredUnix} ${root_path})\n\tfor file in \"${files[@]}\"; do\n\t\tdatasafed rm \"$file\"\n\t\techo \"$file\"\n\tdone\n}\n\n# analyze the start time of the earliest file from the datasafed backend.\n# Then record the file name into dp_oldest_file.info.\n# If the oldest file is no changed, exit the process.\n# This can save traffic consumption.\nfunction DP_analyze_start_time_from_datasafed() {\n\tlocal oldest_file=\"${1:?missing oldest file}\"\n\tlocal get_start_time_from_file=\"${2:?missing get_start_time_from_file function}\"\n\tlocal datasafed_pull=\"${3:?missing datasafed_pull function}\"\n\tlocal info_file=\"${KB_BACKUP_WORKDIR}/dp_oldest_file.info\"\n\tmkdir -p ${KB_BACKUP_WORKDIR} && cd ${KB_BACKUP_WORKDIR}\n\tif [ -f ${info_file} ]; then\n\t\tlast_oldest_file=$(cat ${info_file})\n\t\tlast_oldest_file_name=$(DP_get_file_name_without_ext ${last_oldest_file})\n\t\tif [ \"$last_oldest_file\" == \"${oldest_file}\" ]; then\n\t\t\t# oldest file no changed.\n\t\t\t${get_start_time_from_file} $last_oldest_file_name\n\t\t\treturn\n\t\tfi\n\t\t# remove last oldest file\n\t\tif [ -f ${last_oldest_file_name} ]; then\n\t\t\trm -rf ${last_oldest_file_name}\n\t\tfi\n\tfi\n\t# pull file\n\t${datasafed_pull} ${oldest_file}\n\t# record last oldest file\n\techo ${oldest_file} >${info_file}\n\toldest_file_name=$(DP_get_file_name_without_ext ${oldest_file})\n\t${get_start_time_from_file} ${oldest_file_name}\n}\n\n# get the timeZone offset for location, such as Asia/Shanghai\nfunction getTimeZoneOffset() {\n\tlocal timeZone=${1:?missing time zone}\n\tif [[ $timeZone == \"+\"* ]] || [[ $timeZone == \"-\"* ]]; then\n\t\techo ${timeZone}\n\t\treturn\n\tfi\n\tlocal currTime=$(TZ=UTC date)\n\tlocal utcHour=$(TZ=UTC date -d \"${currTime}\" +\"%H\")\n\tlocal zoneHour=$(TZ=${timeZone} date -d \"${currTime}\" +\"%H\")\n\tlocal offset=$((${zoneHour} - ${utcHour}))\n\tif [ $offset -eq 0 ]; then\n\t\treturn\n\tfi\n\tsymbol=\"+\"\n\tif [ $offset -lt 0 ]; then\n\t\tsymbol=\"-\" && offset=${offset:1}\n\tfi\n\tif [ $offset -lt 10 ]; then\n\t\toffset=\"0${offset}\"\n\tfi\n\techo \"${symbol}${offset}:00\"\n}\n\n# if the script exits with a non-zero exit code, touch a file to indicate that the backup failed,\n# the sync progress container will check this file and exit if it exists\nfunction handle_exit() {\n\texit_code=$?\n\tif [ \"$exit_code\" 
-ne 0 ]; then\n\t\tDP_error_log \"Backup failed with exit code $exit_code\"\n\t\ttouch \"${DP_BACKUP_INFO_FILE}.exit\"\n\t\texit 1\n\tfi\n}\n\nfunction generate_backup_config() {\n\tclickhouse_backup_config=$(mktemp) || {\n\t\tDP_error_log \"Failed to create temporary file\"\n\t\treturn 1\n\t}\n\t# whole config see https://github.com/Altinity/clickhouse-backup\n\tcat >\"$clickhouse_backup_config\" <<'EOF'\ngeneral:\n remote_storage: s3 # REMOTE_STORAGE, choice from: `azblob`,`gcs`,`s3`, etc; if `none` then `upload` and `download` commands will fail.\n max_file_size: 1125899906842624 # MAX_FILE_SIZE, 1PB by default, useless when upload_by_part is true, use to split data parts files by archives\n backups_to_keep_local: 0 # BACKUPS_TO_KEEP_LOCAL, how many latest local backup should be kept, 0 means all created backups will be stored on local disk, -1 means backup will keep after `create` but will delete after `create_remote` command\n backups_to_keep_remote: 0 # BACKUPS_TO_KEEP_REMOTE, how many latest backup should be kept on remote storage, 0 means all uploaded backups will be stored on remote storage.\n log_level: info # LOG_LEVEL, a choice from `debug`, `info`, `warning`, `error`\n allow_empty_backups: true # ALLOW_EMPTY_BACKUPS\n \ download_concurrency: 1 # DOWNLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2))\n upload_concurrency: 1 # UPLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2))\n download_max_bytes_per_second: 0 # DOWNLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling\n upload_max_bytes_per_second: 0 # UPLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling\n object_disk_server_side_copy_concurrency: 32\n allow_object_disk_streaming: false\n # restore schema on cluster is alway run by `INIT_CLUSTER_NAME` cluster of clickhouse, when schema restore, the ddl only runs on first pod of first shard\n restore_schema_on_cluster: \"\" # RESTORE_SCHEMA_ON_CLUSTER, execute all schema related SQL queries with `ON CLUSTER` clause as Distributed DDL. This isn't applicable when `use_embedded_backup_restore: true`\n upload_by_part: true # UPLOAD_BY_PART\n download_by_part: true # DOWNLOAD_BY_PART\n use_resumable_state: true # USE_RESUMABLE_STATE, allow resume upload and download according to the .resumable file. 
Resumable state is not supported for custom method in remote storage.\n restore_database_mapping: {} # RESTORE_DATABASE_MAPPING, like \"src_db1:target_db1,src_db2:target_db2\", restore rules from backup databases to target databases, which is useful when changing destination database, all atomic tables will be created with new UUIDs.\n restore_table_mapping: {} # RESTORE_TABLE_MAPPING, like \"src_table1:target_table1,src_table2:target_table2\" restore rules from backup tables to target tables, which is useful when changing destination tables.\n retries_on_failure: 3 # RETRIES_ON_FAILURE, how many times to retry after a failure during upload or download\n retries_pause: 5s # RETRIES_PAUSE, duration time to pause after each download or upload failure\n \ watch_interval: 1h # WATCH_INTERVAL, use only for `watch` command, backup will create every 1h\n full_interval: 24h # FULL_INTERVAL, use only for `watch` command, full backup will create every 24h\n watch_backup_name_template: \"shard{shard}-{type}-{time:20060102150405}\" # WATCH_BACKUP_NAME_TEMPLATE, used only for `watch` command, macros values will apply from `system.macros` for time:XXX, look format in https://go.dev/src/time/format.go\n \ sharded_operation_mode: none # SHARDED_OPERATION_MODE, how different replicas will shard backing up data for tables. Options are: none (no sharding), table (table granularity), database (database granularity), first-replica (on the lexicographically sorted first active replica). If left empty, then the \"none\" option will be set as default.\n cpu_nice_priority: 15 # CPU niceness priority, to allow throttling CPU intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/nice.1.html\n \ io_nice_priority: \"idle\" # IO niceness priority, to allow throttling DISK intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/ionice.1.html\n \ rbac_backup_always: true # always, backup RBAC objects\n rbac_resolve_conflicts: \"recreate\" # action, when RBAC object with the same name already exists, allow \"recreate\", \"ignore\", \"fail\" values\nclickhouse:\n username: default # CLICKHOUSE_USERNAME\n password: \"\" # CLICKHOUSE_PASSWORD\n host: localhost # CLICKHOUSE_HOST, To make backup data `clickhouse-backup` requires access to the same file system as clickhouse-server, so `host` should localhost or address of another docker container on the same machine, or IP address bound to some network interface on the same host.\n port: 9000 # CLICKHOUSE_PORT, don't use 8123, clickhouse-backup doesn't support HTTP protocol\n disk_mapping: {} # CLICKHOUSE_DISK_MAPPING, use this mapping when your `system.disks` are different between the source and destination clusters during backup and restore process. The format for this env variable is \"disk_name1:disk_path1,disk_name2:disk_path2\". For YAML please continue using map syntax. If destination disk is different from source backup disk then you need to specify the destination disk in the config file: disk_mapping: disk_destination: /var/lib/clickhouse/disks/destination `disk_destination` needs to be referenced in backup (source config), and all names from this map (`disk:path`) shall exist in `system.disks` on destination server. During download of the backup from remote location (s3), if `name` is not present in `disk_mapping` (on the destination server config too) then `default` disk path will used for download. 
`disk_mapping` is used to understand during download where downloaded parts shall be unpacked (which disk) on destination server and where to search for data parts directories during restore.\n skip_tables: # CLICKHOUSE_SKIP_TABLES, the list of tables (pattern are allowed) which are ignored during backup and restore process The format for this env variable is \"pattern1,pattern2,pattern3\". For YAML please continue using list syntax\n \ - system.*\n - INFORMATION_SCHEMA.*\n - information_schema.*\n skip_table_engines: [] # CLICKHOUSE_SKIP_TABLE_ENGINES, the list of tables engines which are ignored during backup, upload, download, restore process The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n \ skip_disks: [] # CLICKHOUSE_SKIP_DISKS, list of disk names which are ignored during create, upload, download and restore command The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n skip_disk_types: [] # CLICKHOUSE_SKIP_DISK_TYPES, list of disk types which are ignored during create, upload, download and restore command The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n timeout: 5m # CLICKHOUSE_TIMEOUT\n freeze_by_part: false # CLICKHOUSE_FREEZE_BY_PART, allow freezing by part instead of freezing the whole table\n freeze_by_part_where: \"\" # CLICKHOUSE_FREEZE_BY_PART_WHERE, allow parts filtering during freezing when freeze_by_part: true\n secure: false # CLICKHOUSE_SECURE, use TLS encryption for connection\n skip_verify: false # CLICKHOUSE_SKIP_VERIFY, skip certificate verification and allow potential certificate warnings\n sync_replicated_tables: true # CLICKHOUSE_SYNC_REPLICATED_TABLES\n \ tls_key: \"\" # CLICKHOUSE_TLS_KEY, filename with TLS key file\n tls_cert: \"\" # CLICKHOUSE_TLS_CERT, filename with TLS certificate file\n tls_ca: \"\" # CLICKHOUSE_TLS_CA, filename with TLS custom authority file\n log_sql_queries: true # CLICKHOUSE_LOG_SQL_QUERIES, logging `clickhouse-backup` SQL queries on `info` level, when true, `debug` level when false\n debug: false # CLICKHOUSE_DEBUG\n \ config_dir: \"/opt/bitnami/clickhouse/etc\" # CLICKHOUSE_CONFIG_DIR\n restart_command: \"sql:SYSTEM SHUTDOWN\" # CLICKHOUSE_RESTART_COMMAND, use this command when restoring with --rbac, --rbac-only or --configs, --configs-only options will split command by ; and execute one by one, all errors will logged and ignore available prefixes - sql: will execute SQL query - exec: will execute command via shell\n ignore_not_exists_error_during_freeze: true # CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE, helps to avoid backup failures when running frequent CREATE / DROP tables and databases during backup, `clickhouse-backup` will ignore `code: 60` and `code: 81` errors during execution of `ALTER TABLE ... 
FREEZE`\n check_replicas_before_attach: true # CLICKHOUSE_CHECK_REPLICAS_BEFORE_ATTACH, helps avoiding concurrent ATTACH PART execution when restoring ReplicatedMergeTree tables\n default_replica_path: \"/clickhouse/tables/{layer}/{shard}/{database}/{table}\" # CLICKHOUSE_DEFAULT_REPLICA_PATH, will use during restore Replicated tables without macros in replication_path if replica already exists, to avoid restoring conflicts\n default_replica_name: \"{replica}\" # CLICKHOUSE_DEFAULT_REPLICA_NAME, will use during restore Replicated tables without macros in replica_name if replica already exists, to avoid restoring conflicts\n use_embedded_backup_restore: false # CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE, use BACKUP / RESTORE SQL statements instead of regular SQL queries to use features of modern ClickHouse server versions\n embedded_backup_disk: \"\" # CLICKHOUSE_EMBEDDED_BACKUP_DISK - disk from system.disks which will use when `use_embedded_backup_restore: true`\n \ backup_mutations: true # CLICKHOUSE_BACKUP_MUTATIONS, allow backup mutations from system.mutations WHERE is_done=0 and apply it during restore\n restore_as_attach: false # CLICKHOUSE_RESTORE_AS_ATTACH, allow restore tables which have inconsistent data parts structure and mutations in progress\n check_parts_columns: true # CLICKHOUSE_CHECK_PARTS_COLUMNS, check data types from system.parts_columns during create backup to guarantee mutation is complete\n max_connections: 0 # CLICKHOUSE_MAX_CONNECTIONS, how many parallel connections could be opened during operations\ns3:\n access_key: \"\" # S3_ACCESS_KEY\n secret_key: \"\" # S3_SECRET_KEY\n bucket: \"\" # S3_BUCKET\n endpoint: \"\" # S3_ENDPOINT\n \ region: us-east-1 # S3_REGION\n acl: private # S3_ACL, AWS changed S3 defaults in April 2023 so that all new buckets have ACL disabled: https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/ They also recommend that ACLs are disabled: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ensure-object-ownership.html use `acl: \"\"` if you see \"api error AccessControlListNotSupported: The bucket does not allow ACLs\"\n assume_role_arn: \"\" # S3_ASSUME_ROLE_ARN\n force_path_style: false # S3_FORCE_PATH_STYLE\n path: \"\" # S3_PATH, `system.macros` values can be applied as {macro_name}\n object_disk_path: \"\" # S3_OBJECT_DISK_PATH, path for backup of part from clickhouse object disks, if object disks present in clickhouse, then shall not be zero and shall not be prefixed by `path`\n \ disable_ssl: false # S3_DISABLE_SSL\n compression_level: 1 # S3_COMPRESSION_LEVEL\n \ compression_format: tar # S3_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is look at details in https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html\n \ sse: \"\" # S3_SSE, empty (default), AES256, or aws:kms\n sse_customer_algorithm: \"\" # S3_SSE_CUSTOMER_ALGORITHM, encryption algorithm, for example, AES256\n \ sse_customer_key: \"\" # S3_SSE_CUSTOMER_KEY, customer-provided encryption key use `openssl rand 32 > aws_sse.key` and `cat aws_sse.key | base64`\n sse_customer_key_md5: \"\" # S3_SSE_CUSTOMER_KEY_MD5, 128-bit MD5 digest of the encryption key according to RFC 1321 use `cat aws_sse.key | openssl dgst -md5 -binary | base64`\n sse_kms_key_id: \"\" # S3_SSE_KMS_KEY_ID, if S3_SSE is aws:kms then specifies the ID of the Amazon Web Services Key Management Service\n sse_kms_encryption_context: \"\" # S3_SSE_KMS_ENCRYPTION_CONTEXT, 
base64-encoded UTF-8 string holding a JSON with the encryption context Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. This is a collection of non-secret key-value pairs that represent additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported only on operations with symmetric encryption KMS keys\n disable_cert_verification: false # S3_DISABLE_CERT_VERIFICATION\n use_custom_storage_class: false # S3_USE_CUSTOM_STORAGE_CLASS\n \ storage_class: STANDARD # S3_STORAGE_CLASS, by default allow only from list https://github.com/aws/aws-sdk-go-v2/blob/main/service/s3/types/enums.go#L787-L799\n \ concurrency: 1 # S3_CONCURRENCY\n max_parts_count: 4000 # S3_MAX_PARTS_COUNT, number of parts for S3 multipart uploads\n allow_multipart_download: false # S3_ALLOW_MULTIPART_DOWNLOAD, allow faster multipart download speed, but will require additional disk space, download_concurrency * part size in worst case\n \ checksum_algorithm: \"\" # S3_CHECKSUM_ALGORITHM, use it when you use object lock which allow to avoid delete keys from bucket until some timeout after creation, use CRC32 as fastest\n object_labels: {} # S3_OBJECT_LABELS, allow setup metadata for each object during upload, use {macro_name} from system.macros and {backupName} for current backup name The format for this env variable is \"key1:value1,key2:value2\". For YAML please continue using map syntax\n custom_storage_class_map: {} # S3_CUSTOM_STORAGE_CLASS_MAP, allow setup storage class depending on the backup name regexp pattern, format nameRegexp > className\n request_payer: \"\" # S3_REQUEST_PAYER, define who will pay to request, look https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html for details, possible values requester, if empty then bucket owner\n debug: false # S3_DEBUG\nEOF\n\texport CLICKHOUSE_BACKUP_CONFIG=\"$clickhouse_backup_config\"\n}\n\nfunction getToolConfigValue() {\n\tlocal var=$1\n\tcat \"$toolConfig\" | grep \"$var\" | awk '{print $NF}'\n}\n\nfunction set_clickhouse_backup_config_env() {\n\ttoolConfig=/etc/datasafed/datasafed.conf\n\tif [ ! -f ${toolConfig} ]; then\n\t\tDP_error_log \"Config file not found: ${toolConfig}\"\n\t\texit 1\n\tfi\n\n\tlocal provider=\"\"\n\tlocal access_key_id=\"\"\n\tlocal secret_access_key=\"\"\n\tlocal region=\"\"\n\tlocal endpoint=\"\"\n\tlocal bucket=\"\"\n\n\tIFS=$'\\n'\n\tfor line in $(cat ${toolConfig}); do\n\t\tline=$(eval echo $line)\n\t\tif [[ $line == \"access_key_id\"* ]]; then\n\t\t\taccess_key_id=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"secret_access_key\"* ]]; then\n\t\t\tsecret_access_key=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"region\"* ]]; then\n\t\t\tregion=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"endpoint\"* ]]; then\n\t\t\tendpoint=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"root\"* ]]; then\n\t\t\tbucket=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"chunk_size\"* ]]; then\n\t\t\tchunk_size=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"provider\"* ]]; then\n\t\t\tprovider=$(getToolConfigValue \"$line\")\n\t\tfi\n\tdone\n\n\tif [[ ! 
$endpoint =~ ^https?:// ]]; then\n\t\tendpoint=\"https://${endpoint}\"\n\tfi\n\n\tif [[ \"$provider\" == \"Alibaba\" ]]; then\n\t\tregex='https?:\\/\\/oss-(.*?)\\.aliyuncs\\.com'\n\t\tif [[ \"$endpoint\" =~ $regex ]]; then\n\t\t\tregion=\"${BASH_REMATCH[1]}\"\n\t\t\tDP_log \"Extract region from $endpoint-> $region\"\n\t\telse\n\t\t\tDP_log \"Failed to extract region from endpoint: $endpoint\"\n\t\tfi\n\telif [[ \"$provider\" == \"TencentCOS\" ]]; then\n\t\tregex='https?:\\/\\/cos\\.(.*?)\\.myqcloud\\.com'\n\t\tif [[ \"$endpoint\" =~ $regex ]]; then\n\t\t\tregion=\"${BASH_REMATCH[1]}\"\n\t\t\tDP_log \"Extract region from $endpoint-> $region\"\n\t\telse\n\t\t\tDP_log \"Failed to extract region from endpoint: $endpoint\"\n\t\tfi\n\telif [[ \"$provider\" == \"Minio\" || \"$provider\" == \"RustFS\" ]]; then\n\t\texport S3_FORCE_PATH_STYLE=true\n\telse\n\t\techo \"Unsupported provider: $provider\"\n\tfi\n\n\texport S3_ACCESS_KEY=\"${access_key_id}\"\n\texport S3_SECRET_KEY=\"${secret_access_key}\"\n\texport S3_REGION=\"${region}\"\n\texport S3_ENDPOINT=\"${endpoint}\"\n\texport S3_BUCKET=\"${bucket}\"\n\texport S3_PART_SIZE=\"${chunk_size}\"\n\texport S3_PATH=\"${DP_BACKUP_BASE_PATH}\"\n\texport INIT_CLUSTER_NAME=\"${INIT_CLUSTER_NAME:-default}\"\n\texport RESTORE_SCHEMA_ON_CLUSTER=\"${INIT_CLUSTER_NAME}\"\n\texport CLICKHOUSE_HOST=\"${DP_DB_HOST}\"\n\texport CLICKHOUSE_USERNAME=\"${CLICKHOUSE_ADMIN_USER}\"\n\texport CLICKHOUSE_PASSWORD=\"${CLICKHOUSE_ADMIN_PASSWORD}\"\n\tif [[ \"${TLS_ENABLED:-false}\" == \"true\" ]]; then\n\t\texport CLICKHOUSE_SECURE=true\n\t\texport CLICKHOUSE_PORT=\"${CLICKHOUSE_TCP_SECURE_PORT:-9440}\"\n\t\texport CLICKHOUSE_TLS_CA=\"/etc/pki/tls/ca.pem\"\n\t\texport CLICKHOUSE_TLS_CERT=\"/etc/pki/tls/cert.pem\"\n\t\texport CLICKHOUSE_TLS_KEY=\"/etc/pki/tls/key.pem\"\n\t\texport CLICKHOUSE_SKIP_VERIFY=true\n\tfi\n\tDP_log \"Dynamic environment variables for clickhouse-backup have been set.\"\n}\n\nfunction ch_query() {\n\tlocal query=\"$1\"\n\tlocal ch_port=\"${CLICKHOUSE_PORT:-9000}\"\n\tlocal ch_args=(--user \"${CLICKHOUSE_USERNAME}\" --password \"${CLICKHOUSE_PASSWORD}\" --host \"${CLICKHOUSE_HOST}\" --port \"$ch_port\" --connect_timeout=5)\n\tclickhouse-client \"${ch_args[@]}\" --query \"$query\"\n}\n\nfunction download_backup() {\n\tlocal backup_name=\"$1\"\n\tclickhouse-backup download \"$backup_name\" || {\n\t\tDP_error_log \"Failed to download backup '$backup_name'\"\n\t\treturn 1\n\t}\n\tDP_log \"Downloading backup '$backup_name' from remote storage...\"\n\treturn 0\n}\n\nfunction fetch_backup() {\n\tlocal backup_name=$1\n\tif clickhouse-backup list local | grep -q \"$backup_name\"; then\n\t\tDP_log \"Local backup '$backup_name' found.\"\n\telse\n\t\tDP_log \"Local backup '$backup_name' not found. Downloading...\"\n\t\tdownload_backup \"$backup_name\" || {\n\t\t\tDP_error_log \"Failed to download backup '$backup_name'. Exiting.\"\n\t\t\texit 1\n\t\t}\n\t\tclickhouse-backup list local | grep -q \"$backup_name\" || {\n\t\t\tDP_error_log \"Backup '$backup_name' not found after download. 
Exiting.\"\n\t\t\texit 1\n\t\t}\n\tfi\n\tDP_log \"Backup '$backup_name' is available locally.\"\n}\n\nfunction delete_backups_except() {\n\tlocal latest_backup=$1\n\tDP_log \"delete backup except $latest_backup\"\n\tbackup_list=$(clickhouse-backup list)\n\techo \"$backup_list\" | awk '/local/ {print $1}' | while IFS= read -r backup_name; do\n\t\tif [ \"$backup_name\" != \"$latest_backup\" ]; then\n\t\t\tclickhouse-backup delete local \"$backup_name\" || {\n\t\t\t\tDP_error_log \"Clickhouse-backup delete local backup $backup_name FAILED\"\n\t\t\t}\n\t\tfi\n\tdone\n}\n\n# Save backup size info for DP status reporting\nfunction save_backup_size() {\n\tlocal shard_base_dir\n\tshard_base_dir=$(dirname \"${DP_BACKUP_BASE_PATH}\")\n\texport DATASAFED_BACKEND_BASE_PATH=\"$shard_base_dir\"\n\texport PATH=\"$PATH:$DP_DATASAFED_BIN_PATH\"\n\tlocal backup_size\n\tbackup_size=$(datasafed stat / | grep TotalSize | awk '{print $2}')\n\tDP_save_backup_status_info \"$backup_size\"\n}\n\n# Restore schema and wait for sync across shards\nfunction restore_schema_and_sync() {\n\tlocal backup_name=\"$1\"\n\tlocal mode_info=\"$2\"\n\tlocal schema_db=\"kubeblocks\"\n\tlocal schema_table=\"__restore_ready__\"\n\tlocal timeout=\"${RESTORE_SCHEMA_READY_TIMEOUT_SECONDS:-1800}\"\n\tlocal interval=\"${RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS:-5}\"\n\n\t# Determine if this pod should execute schema restore\n\tlocal should_restore_schema=false\n\tif [[ \"$mode_info\" == \"standalone\" ]]; then\n\t\tshould_restore_schema=true\n\telse\n\t\tlocal first_component=\"${mode_info#cluster:}\"\n\t\t[[ \"${CURRENT_SHARD_COMPONENT_SHORT_NAME}\" == \"$first_component\" ]] && should_restore_schema=true\n\tfi\n\n\tif [[ \"$should_restore_schema\" == \"true\" ]]; then\n\t\t# Standalone: unset ON CLUSTER mode to avoid ZK requirement\n\t\t[[ \"$mode_info\" == \"standalone\" ]] && unset RESTORE_SCHEMA_ON_CLUSTER\n\t\tclickhouse-backup restore_remote \"$backup_name\" --schema --rbac || {\n\t\t\tDP_error_log \"Clickhouse-backup restore_remote backup $backup_name FAILED\"\n\t\t\treturn 1\n\t\t}\n\t\t# Cluster mode: create marker table for cross-shard coordination\n\t\tif [[ \"$mode_info\" != \"standalone\" ]]; then\n\t\t\tch_query \"CREATE DATABASE IF NOT EXISTS \\`${schema_db}\\` ON CLUSTER \\`${INIT_CLUSTER_NAME}\\`\" || {\n\t\t\t\tDP_error_log \"Failed to create database ${schema_db}\"\n\t\t\t\treturn 1\n\t\t\t}\n\t\t\tch_query \"CREATE TABLE IF NOT EXISTS \\`${schema_db}\\`.\\`${schema_table}\\` ON CLUSTER \\`${INIT_CLUSTER_NAME}\\` (shard String, finished_at DateTime, backup_name String) ENGINE=TinyLog\" || {\n\t\t\t\tDP_error_log \"Failed to create schema ready marker\"\n\t\t\t\treturn 1\n\t\t\t}\n\t\tfi\n\telse\n\t\tDP_log \"Waiting for schema ready table on ${CLICKHOUSE_HOST}...\"\n\t\tlocal start=$(date +%s)\n\t\twhile true; do\n\t\t\tif [[ \"$(ch_query \"EXISTS TABLE \\`${schema_db}\\`.\\`${schema_table}\\`\")\" == \"1\" ]]; then\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\tlocal now=$(date +%s)\n\t\t\tif [[ $((now - start)) -ge $timeout ]]; then\n\t\t\t\tDP_error_log \"Timeout waiting for schema ready table on ${CLICKHOUSE_HOST}\"\n\t\t\t\treturn 1\n\t\t\tfi\n\t\t\tsleep \"$interval\"\n\t\tdone\n\tfi\n}\n\n# Full restore: schema + data + marker\nfunction do_restore() {\n\tlocal backup_name=\"$1\"\n\tlocal mode_info=\"$2\"\n\tlocal schema_db=\"kubeblocks\"\n\tlocal schema_table=\"__restore_ready__\"\n\n\t# Restore schema (first shard in cluster uses ON CLUSTER DDL)\n\trestore_schema_and_sync \"$backup_name\" \"$mode_info\" || return 1\n\n\t# 
Restore data\n\tclickhouse-backup restore_remote \"$backup_name\" --data || {\n\t\tDP_error_log \"Clickhouse-backup restore_remote --data FAILED\"\n\t\treturn 1\n\t}\n\n\t# Insert shard ready marker (cluster mode only)\n\tif [[ \"$mode_info\" != \"standalone\" ]]; then\n\t\tch_query \"INSERT INTO \\`${schema_db}\\`.\\`${schema_table}\\` (shard, finished_at, backup_name) VALUES ('${CURRENT_SHARD_COMPONENT_SHORT_NAME}', now(), '$backup_name')\" || {\n\t\t\tDP_error_log \"Failed to insert shard ready marker\"\n\t\t\treturn 1\n\t\t}\n\tfi\n}\n\n#!/bin/bash\nset -exo pipefail\n\n# Supports: standalone (single node) and cluster (multi-shard) topologies\n# Strategy: first shard restores schema with ON CLUSTER, others wait for sync\n\ntrap handle_exit EXIT\ngenerate_backup_config\nset_clickhouse_backup_config_env\n\nif [[ \"${CLICKHOUSE_SECURE}\" = \"true\" ]]; then\n\tDP_error_log \"ClickHouse restore does not support TLS\"\n\texit 1\nfi\n\n# 1. Detect topology mode: standalone (no ':' in FQDN) or cluster\nfirst_entry=\"${ALL_COMBINED_SHARDS_POD_FQDN_LIST%%,*}\"\nfirst_component=\"${first_entry%%:*}\"\nif [[ -z \"$first_component\" ]]; then\n\tDP_error_log \"Invalid ALL_COMBINED_SHARDS_POD_FQDN_LIST\"\n\texit 1\nfi\nif [[ \"$first_component\" == \"$first_entry\" ]]; then\n\tmode_info=\"standalone\"\n\tDP_log \"Standalone mode detected\"\nelse\n\tmode_info=\"cluster:$first_component\"\nfi\n\n# 2. Restore schema + data + marker\ndo_restore \"${DP_BACKUP_NAME}\" \"$mode_info\" || exit 1\n\n# 3. Cleanup local backups\ndelete_backups_except \"\"\n" env: - name: DP_BACKUP_NAME value: backup-ns-uwpgk-clkhouse-icopne-20260211191142 - name: DP_TARGET_RELATIVE_PATH value: clickhouse-7gx - name: DP_BACKUP_ROOT_PATH value: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse - name: DP_BACKUP_BASE_PATH value: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx - name: DP_BACKUP_STOP_TIME value: "2026-02-11T11:12:05Z" - name: RESTORE_SCHEMA_READY_TIMEOUT_SECONDS value: "1800" - name: RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS value: "5" - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: CURRENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: INIT_CLUSTER_NAME value: default - name: DP_DB_USER valueFrom: secretKeyRef: key: username name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: DP_DB_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: DP_DB_PORT value: "8001" - name: DP_DB_HOST value: clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed envFrom: - configMapRef: name: clkhouse-icopne-backup-clickhouse-ngn-env optional: false image: docker.io/apecloud/clickhouse-backup-full:2.6.42 imagePullPolicy: IfNotPresent name: restore resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /etc/clickhouse-client name: client-config - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: 
kube-api-access-42mtd readOnly: true - args: - |2 set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" command: - sh - -c env: - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: restore-manager resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-42mtd readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: kbcli-test-registry-key initContainers: - command: - /bin/sh - -c - /scripts/install-datasafed.sh /bin/datasafed image: docker.io/apecloud/datasafed:0.2.3 imagePullPolicy: IfNotPresent name: dp-copy-datasafed resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-42mtd readOnly: true nodeName: aks-cicdamdpool-55976491-vmss000001 nodeSelector: kubernetes.io/hostname: aks-cicdamdpool-55976491-vmss000001 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: runAsUser: 0 serviceAccount: kubeblocks-dataprotection-worker serviceAccountName: kubeblocks-dataprotection-worker terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: data persistentVolumeClaim: claimName: data-clkhouse-icopne-backup-clickhouse-ngn-0 - configMap: defaultMode: 292 name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-client-tpl name: client-config - configMap: defaultMode: 292 name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-tpl name: config - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] path: stop_restore_manager name: downward-volume-sidecard - name: dp-datasafed-config secret: defaultMode: 420 secretName: tool-config-backuprepo-kbcli-test-88dtkr - emptyDir: {} name: dp-datasafed-bin - name: kube-api-access-42mtd projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:15Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-11T11:13:52Z" status: "True" type: Initialized - 
lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:11Z" reason: PodFailed status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:11Z" reason: PodFailed status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-11T11:13:51Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://11d5f9dafb7f3dd5542af53b79c4d467cbab2d1425f3b12bd864d7b8821b3a29 image: docker.io/apecloud/clickhouse-backup-full:2.6.42 imageID: docker.io/apecloud/clickhouse-backup-full@sha256:0dedf050bf78f889c2d6ed7120aae4df927c7816a72863ac017aba49c072af4e lastState: {} name: restore ready: false restartCount: 0 started: false state: terminated: containerID: containerd://11d5f9dafb7f3dd5542af53b79c4d467cbab2d1425f3b12bd864d7b8821b3a29 exitCode: 1 finishedAt: "2026-02-11T11:15:11Z" reason: Error startedAt: "2026-02-11T11:14:04Z" volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /etc/clickhouse-client name: client-config - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-42mtd readOnly: true recursiveReadOnly: Disabled - containerID: containerd://55c830348418ba810971214efad20c6a10a6d1e0f50a20597278d0b3167ddc3e image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: restore-manager ready: false restartCount: 0 started: false state: terminated: containerID: containerd://55c830348418ba810971214efad20c6a10a6d1e0f50a20597278d0b3167ddc3e exitCode: 0 finishedAt: "2026-02-11T11:15:13Z" reason: Completed startedAt: "2026-02-11T11:14:05Z" volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-42mtd readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.7 hostIPs: - ip: 10.224.0.7 initContainerStatuses: - containerID: containerd://a8bc69d15939d90a4e4fce91811eb07094da4d46e000e7aad8b524716ddc67dd image: docker.io/apecloud/datasafed:0.2.3 imageID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f lastState: {} name: dp-copy-datasafed ready: true restartCount: 0 started: false state: terminated: containerID: containerd://a8bc69d15939d90a4e4fce91811eb07094da4d46e000e7aad8b524716ddc67dd exitCode: 0 finishedAt: "2026-02-11T11:13:51Z" reason: Completed startedAt: "2026-02-11T11:13:51Z" volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-42mtd readOnly: true recursiveReadOnly: Disabled phase: Failed podIP: 10.244.4.222 podIPs: - ip: 10.244.4.222 qosClass: BestEffort startTime: "2026-02-11T11:13:51Z" ------------------------------------------------------------------------------------------------------------------  `kubectl get pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 -o yaml --namespace ns-uwpgk `(B  apiVersion: v1 kind: Pod metadata: annotations: dataprotection.kubeblocks.io/stop-restore-manager: "true" creationTimestamp: "2026-02-11T11:15:23Z" generateName: 
restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0- labels: app.kubernetes.io/managed-by: kubeblocks-dataprotection batch.kubernetes.io/controller-uid: b0539470-d058-40af-94c5-e989ce4bf597 batch.kubernetes.io/job-name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 controller-uid: b0539470-d058-40af-94c5-e989ce4bf597 dataprotection.kubeblocks.io/restore: clkhouse-icopne-backup-clickhouse-ngn-5362cf29-postready job-name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 namespace: ns-uwpgk ownerReferences: - apiVersion: batch/v1 blockOwnerDeletion: true controller: true kind: Job name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 uid: b0539470-d058-40af-94c5-e989ce4bf597 resourceVersion: "120629" uid: a189b163-28dd-47ff-b24a-b4092b4c6c4b spec: containers: - command: - bash - -c - "#!/bin/bash\n# log info file\nfunction DP_log() {\n\tmsg=$1\n\tlocal curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n\techo \"${curr_date} INFO: $msg\"\n}\n\n# log error info\nfunction DP_error_log() {\n\tmsg=$1\n\tlocal curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n\techo \"${curr_date} ERROR: $msg\"\n}\n\n# Get file names without extensions based on the incoming file path\nfunction DP_get_file_name_without_ext() {\n\tlocal fileName=$1\n\tlocal file_without_ext=${fileName%.*}\n\techo $(basename ${file_without_ext})\n}\n\n# Save backup status info file for syncing progress.\n# timeFormat: %Y-%m-%dT%H:%M:%SZ\nfunction DP_save_backup_status_info() {\n\tlocal totalSize=$1\n\tlocal startTime=$2\n\tlocal stopTime=$3\n\tlocal timeZone=$4\n\tlocal extras=$5\n\tlocal timeZoneStr=\"\"\n\tif [ ! -z ${timeZone} ]; then\n\t\ttimeZoneStr=\",\\\"timeZone\\\":\\\"${timeZone}\\\"\"\n\tfi\n\tif [ -z \"${stopTime}\" ]; then\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\"}\" >${DP_BACKUP_INFO_FILE}\n\telif [ -z \"${startTime}\" ]; then\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" >${DP_BACKUP_INFO_FILE}\n\telse\n\t\techo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"start\\\":\\\"${startTime}\\\",\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" >${DP_BACKUP_INFO_FILE}\n\tfi\n}\n\n# Clean up expired logfiles.\n# Default interval is 60s\n# Default rootPath is /\nfunction DP_purge_expired_files() {\n\tlocal currentUnix=\"${1:?missing current unix}\"\n\tlocal last_purge_time=\"${2:?missing last_purge_time}\"\n\tlocal root_path=${3:-\"/\"}\n\tlocal interval_seconds=${4:-60}\n\tlocal diff_time=$((${currentUnix} - ${last_purge_time}))\n\tif [[ -z ${DP_TTL_SECONDS} || ${diff_time} -lt ${interval_seconds} ]]; then\n\t\treturn\n\tfi\n\texpiredUnix=$((${currentUnix} - ${DP_TTL_SECONDS}))\n\tfiles=$(datasafed list -f --recursive --older-than ${expiredUnix} ${root_path})\n\tfor file in \"${files[@]}\"; do\n\t\tdatasafed rm \"$file\"\n\t\techo \"$file\"\n\tdone\n}\n\n# analyze the start time of the earliest file from the datasafed backend.\n# Then record the file name into dp_oldest_file.info.\n# If the oldest file is no changed, exit the process.\n# This can save traffic consumption.\nfunction DP_analyze_start_time_from_datasafed() {\n\tlocal oldest_file=\"${1:?missing oldest file}\"\n\tlocal get_start_time_from_file=\"${2:?missing get_start_time_from_file function}\"\n\tlocal datasafed_pull=\"${3:?missing datasafed_pull function}\"\n\tlocal 
info_file=\"${KB_BACKUP_WORKDIR}/dp_oldest_file.info\"\n\tmkdir -p ${KB_BACKUP_WORKDIR} && cd ${KB_BACKUP_WORKDIR}\n\tif [ -f ${info_file} ]; then\n\t\tlast_oldest_file=$(cat ${info_file})\n\t\tlast_oldest_file_name=$(DP_get_file_name_without_ext ${last_oldest_file})\n\t\tif [ \"$last_oldest_file\" == \"${oldest_file}\" ]; then\n\t\t\t# oldest file no changed.\n\t\t\t${get_start_time_from_file} $last_oldest_file_name\n\t\t\treturn\n\t\tfi\n\t\t# remove last oldest file\n\t\tif [ -f ${last_oldest_file_name} ]; then\n\t\t\trm -rf ${last_oldest_file_name}\n\t\tfi\n\tfi\n\t# pull file\n\t${datasafed_pull} ${oldest_file}\n\t# record last oldest file\n\techo ${oldest_file} >${info_file}\n\toldest_file_name=$(DP_get_file_name_without_ext ${oldest_file})\n\t${get_start_time_from_file} ${oldest_file_name}\n}\n\n# get the timeZone offset for location, such as Asia/Shanghai\nfunction getTimeZoneOffset() {\n\tlocal timeZone=${1:?missing time zone}\n\tif [[ $timeZone == \"+\"* ]] || [[ $timeZone == \"-\"* ]]; then\n\t\techo ${timeZone}\n\t\treturn\n\tfi\n\tlocal currTime=$(TZ=UTC date)\n\tlocal utcHour=$(TZ=UTC date -d \"${currTime}\" +\"%H\")\n\tlocal zoneHour=$(TZ=${timeZone} date -d \"${currTime}\" +\"%H\")\n\tlocal offset=$((${zoneHour} - ${utcHour}))\n\tif [ $offset -eq 0 ]; then\n\t\treturn\n\tfi\n\tsymbol=\"+\"\n\tif [ $offset -lt 0 ]; then\n\t\tsymbol=\"-\" && offset=${offset:1}\n\tfi\n\tif [ $offset -lt 10 ]; then\n\t\toffset=\"0${offset}\"\n\tfi\n\techo \"${symbol}${offset}:00\"\n}\n\n# if the script exits with a non-zero exit code, touch a file to indicate that the backup failed,\n# the sync progress container will check this file and exit if it exists\nfunction handle_exit() {\n\texit_code=$?\n\tif [ \"$exit_code\" -ne 0 ]; then\n\t\tDP_error_log \"Backup failed with exit code $exit_code\"\n\t\ttouch \"${DP_BACKUP_INFO_FILE}.exit\"\n\t\texit 1\n\tfi\n}\n\nfunction generate_backup_config() {\n\tclickhouse_backup_config=$(mktemp) || {\n\t\tDP_error_log \"Failed to create temporary file\"\n\t\treturn 1\n\t}\n\t# whole config see https://github.com/Altinity/clickhouse-backup\n\tcat >\"$clickhouse_backup_config\" <<'EOF'\ngeneral:\n remote_storage: s3 # REMOTE_STORAGE, choice from: `azblob`,`gcs`,`s3`, etc; if `none` then `upload` and `download` commands will fail.\n max_file_size: 1125899906842624 # MAX_FILE_SIZE, 1PB by default, useless when upload_by_part is true, use to split data parts files by archives\n backups_to_keep_local: 0 # BACKUPS_TO_KEEP_LOCAL, how many latest local backup should be kept, 0 means all created backups will be stored on local disk, -1 means backup will keep after `create` but will delete after `create_remote` command\n backups_to_keep_remote: 0 # BACKUPS_TO_KEEP_REMOTE, how many latest backup should be kept on remote storage, 0 means all uploaded backups will be stored on remote storage.\n log_level: info # LOG_LEVEL, a choice from `debug`, `info`, `warning`, `error`\n allow_empty_backups: true # ALLOW_EMPTY_BACKUPS\n \ download_concurrency: 1 # DOWNLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2))\n upload_concurrency: 1 # UPLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2))\n download_max_bytes_per_second: 0 # DOWNLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling\n upload_max_bytes_per_second: 0 # UPLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling\n object_disk_server_side_copy_concurrency: 32\n allow_object_disk_streaming: false\n # restore schema on cluster is alway run by 
`INIT_CLUSTER_NAME` cluster of clickhouse, when schema restore, the ddl only runs on first pod of first shard\n restore_schema_on_cluster: \"\" # RESTORE_SCHEMA_ON_CLUSTER, execute all schema related SQL queries with `ON CLUSTER` clause as Distributed DDL. This isn't applicable when `use_embedded_backup_restore: true`\n upload_by_part: true # UPLOAD_BY_PART\n download_by_part: true # DOWNLOAD_BY_PART\n use_resumable_state: true # USE_RESUMABLE_STATE, allow resume upload and download according to the .resumable file. Resumable state is not supported for custom method in remote storage.\n restore_database_mapping: {} # RESTORE_DATABASE_MAPPING, like \"src_db1:target_db1,src_db2:target_db2\", restore rules from backup databases to target databases, which is useful when changing destination database, all atomic tables will be created with new UUIDs.\n restore_table_mapping: {} # RESTORE_TABLE_MAPPING, like \"src_table1:target_table1,src_table2:target_table2\" restore rules from backup tables to target tables, which is useful when changing destination tables.\n retries_on_failure: 3 # RETRIES_ON_FAILURE, how many times to retry after a failure during upload or download\n retries_pause: 5s # RETRIES_PAUSE, duration time to pause after each download or upload failure\n \ watch_interval: 1h # WATCH_INTERVAL, use only for `watch` command, backup will create every 1h\n full_interval: 24h # FULL_INTERVAL, use only for `watch` command, full backup will create every 24h\n watch_backup_name_template: \"shard{shard}-{type}-{time:20060102150405}\" # WATCH_BACKUP_NAME_TEMPLATE, used only for `watch` command, macros values will apply from `system.macros` for time:XXX, look format in https://go.dev/src/time/format.go\n \ sharded_operation_mode: none # SHARDED_OPERATION_MODE, how different replicas will shard backing up data for tables. Options are: none (no sharding), table (table granularity), database (database granularity), first-replica (on the lexicographically sorted first active replica). If left empty, then the \"none\" option will be set as default.\n cpu_nice_priority: 15 # CPU niceness priority, to allow throttling CPU intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/nice.1.html\n \ io_nice_priority: \"idle\" # IO niceness priority, to allow throttling DISK intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/ionice.1.html\n \ rbac_backup_always: true # always, backup RBAC objects\n rbac_resolve_conflicts: \"recreate\" # action, when RBAC object with the same name already exists, allow \"recreate\", \"ignore\", \"fail\" values\nclickhouse:\n username: default # CLICKHOUSE_USERNAME\n password: \"\" # CLICKHOUSE_PASSWORD\n host: localhost # CLICKHOUSE_HOST, To make backup data `clickhouse-backup` requires access to the same file system as clickhouse-server, so `host` should localhost or address of another docker container on the same machine, or IP address bound to some network interface on the same host.\n port: 9000 # CLICKHOUSE_PORT, don't use 8123, clickhouse-backup doesn't support HTTP protocol\n disk_mapping: {} # CLICKHOUSE_DISK_MAPPING, use this mapping when your `system.disks` are different between the source and destination clusters during backup and restore process. The format for this env variable is \"disk_name1:disk_path1,disk_name2:disk_path2\". For YAML please continue using map syntax. 
If destination disk is different from source backup disk then you need to specify the destination disk in the config file: disk_mapping: disk_destination: /var/lib/clickhouse/disks/destination `disk_destination` needs to be referenced in backup (source config), and all names from this map (`disk:path`) shall exist in `system.disks` on destination server. During download of the backup from remote location (s3), if `name` is not present in `disk_mapping` (on the destination server config too) then `default` disk path will used for download. `disk_mapping` is used to understand during download where downloaded parts shall be unpacked (which disk) on destination server and where to search for data parts directories during restore.\n skip_tables: # CLICKHOUSE_SKIP_TABLES, the list of tables (pattern are allowed) which are ignored during backup and restore process The format for this env variable is \"pattern1,pattern2,pattern3\". For YAML please continue using list syntax\n \ - system.*\n - INFORMATION_SCHEMA.*\n - information_schema.*\n skip_table_engines: [] # CLICKHOUSE_SKIP_TABLE_ENGINES, the list of tables engines which are ignored during backup, upload, download, restore process The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n \ skip_disks: [] # CLICKHOUSE_SKIP_DISKS, list of disk names which are ignored during create, upload, download and restore command The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n skip_disk_types: [] # CLICKHOUSE_SKIP_DISK_TYPES, list of disk types which are ignored during create, upload, download and restore command The format for this env variable is \"Engine1,Engine2,engine3\". For YAML please continue using list syntax\n timeout: 5m # CLICKHOUSE_TIMEOUT\n freeze_by_part: false # CLICKHOUSE_FREEZE_BY_PART, allow freezing by part instead of freezing the whole table\n freeze_by_part_where: \"\" # CLICKHOUSE_FREEZE_BY_PART_WHERE, allow parts filtering during freezing when freeze_by_part: true\n secure: false # CLICKHOUSE_SECURE, use TLS encryption for connection\n skip_verify: false # CLICKHOUSE_SKIP_VERIFY, skip certificate verification and allow potential certificate warnings\n sync_replicated_tables: true # CLICKHOUSE_SYNC_REPLICATED_TABLES\n \ tls_key: \"\" # CLICKHOUSE_TLS_KEY, filename with TLS key file\n tls_cert: \"\" # CLICKHOUSE_TLS_CERT, filename with TLS certificate file\n tls_ca: \"\" # CLICKHOUSE_TLS_CA, filename with TLS custom authority file\n log_sql_queries: true # CLICKHOUSE_LOG_SQL_QUERIES, logging `clickhouse-backup` SQL queries on `info` level, when true, `debug` level when false\n debug: false # CLICKHOUSE_DEBUG\n \ config_dir: \"/opt/bitnami/clickhouse/etc\" # CLICKHOUSE_CONFIG_DIR\n restart_command: \"sql:SYSTEM SHUTDOWN\" # CLICKHOUSE_RESTART_COMMAND, use this command when restoring with --rbac, --rbac-only or --configs, --configs-only options will split command by ; and execute one by one, all errors will logged and ignore available prefixes - sql: will execute SQL query - exec: will execute command via shell\n ignore_not_exists_error_during_freeze: true # CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE, helps to avoid backup failures when running frequent CREATE / DROP tables and databases during backup, `clickhouse-backup` will ignore `code: 60` and `code: 81` errors during execution of `ALTER TABLE ... 
FREEZE`\n check_replicas_before_attach: true # CLICKHOUSE_CHECK_REPLICAS_BEFORE_ATTACH, helps avoiding concurrent ATTACH PART execution when restoring ReplicatedMergeTree tables\n default_replica_path: \"/clickhouse/tables/{layer}/{shard}/{database}/{table}\" # CLICKHOUSE_DEFAULT_REPLICA_PATH, will use during restore Replicated tables without macros in replication_path if replica already exists, to avoid restoring conflicts\n default_replica_name: \"{replica}\" # CLICKHOUSE_DEFAULT_REPLICA_NAME, will use during restore Replicated tables without macros in replica_name if replica already exists, to avoid restoring conflicts\n use_embedded_backup_restore: false # CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE, use BACKUP / RESTORE SQL statements instead of regular SQL queries to use features of modern ClickHouse server versions\n embedded_backup_disk: \"\" # CLICKHOUSE_EMBEDDED_BACKUP_DISK - disk from system.disks which will use when `use_embedded_backup_restore: true`\n \ backup_mutations: true # CLICKHOUSE_BACKUP_MUTATIONS, allow backup mutations from system.mutations WHERE is_done=0 and apply it during restore\n restore_as_attach: false # CLICKHOUSE_RESTORE_AS_ATTACH, allow restore tables which have inconsistent data parts structure and mutations in progress\n check_parts_columns: true # CLICKHOUSE_CHECK_PARTS_COLUMNS, check data types from system.parts_columns during create backup to guarantee mutation is complete\n max_connections: 0 # CLICKHOUSE_MAX_CONNECTIONS, how many parallel connections could be opened during operations\ns3:\n access_key: \"\" # S3_ACCESS_KEY\n secret_key: \"\" # S3_SECRET_KEY\n bucket: \"\" # S3_BUCKET\n endpoint: \"\" # S3_ENDPOINT\n \ region: us-east-1 # S3_REGION\n acl: private # S3_ACL, AWS changed S3 defaults in April 2023 so that all new buckets have ACL disabled: https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/ They also recommend that ACLs are disabled: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ensure-object-ownership.html use `acl: \"\"` if you see \"api error AccessControlListNotSupported: The bucket does not allow ACLs\"\n assume_role_arn: \"\" # S3_ASSUME_ROLE_ARN\n force_path_style: false # S3_FORCE_PATH_STYLE\n path: \"\" # S3_PATH, `system.macros` values can be applied as {macro_name}\n object_disk_path: \"\" # S3_OBJECT_DISK_PATH, path for backup of part from clickhouse object disks, if object disks present in clickhouse, then shall not be zero and shall not be prefixed by `path`\n \ disable_ssl: false # S3_DISABLE_SSL\n compression_level: 1 # S3_COMPRESSION_LEVEL\n \ compression_format: tar # S3_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is look at details in https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html\n \ sse: \"\" # S3_SSE, empty (default), AES256, or aws:kms\n sse_customer_algorithm: \"\" # S3_SSE_CUSTOMER_ALGORITHM, encryption algorithm, for example, AES256\n \ sse_customer_key: \"\" # S3_SSE_CUSTOMER_KEY, customer-provided encryption key use `openssl rand 32 > aws_sse.key` and `cat aws_sse.key | base64`\n sse_customer_key_md5: \"\" # S3_SSE_CUSTOMER_KEY_MD5, 128-bit MD5 digest of the encryption key according to RFC 1321 use `cat aws_sse.key | openssl dgst -md5 -binary | base64`\n sse_kms_key_id: \"\" # S3_SSE_KMS_KEY_ID, if S3_SSE is aws:kms then specifies the ID of the Amazon Web Services Key Management Service\n sse_kms_encryption_context: \"\" # S3_SSE_KMS_ENCRYPTION_CONTEXT, 
base64-encoded UTF-8 string holding a JSON with the encryption context Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. This is a collection of non-secret key-value pairs that represent additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported only on operations with symmetric encryption KMS keys\n disable_cert_verification: false # S3_DISABLE_CERT_VERIFICATION\n use_custom_storage_class: false # S3_USE_CUSTOM_STORAGE_CLASS\n \ storage_class: STANDARD # S3_STORAGE_CLASS, by default allow only from list https://github.com/aws/aws-sdk-go-v2/blob/main/service/s3/types/enums.go#L787-L799\n \ concurrency: 1 # S3_CONCURRENCY\n max_parts_count: 4000 # S3_MAX_PARTS_COUNT, number of parts for S3 multipart uploads\n allow_multipart_download: false # S3_ALLOW_MULTIPART_DOWNLOAD, allow faster multipart download speed, but will require additional disk space, download_concurrency * part size in worst case\n \ checksum_algorithm: \"\" # S3_CHECKSUM_ALGORITHM, use it when you use object lock which allow to avoid delete keys from bucket until some timeout after creation, use CRC32 as fastest\n object_labels: {} # S3_OBJECT_LABELS, allow setup metadata for each object during upload, use {macro_name} from system.macros and {backupName} for current backup name The format for this env variable is \"key1:value1,key2:value2\". For YAML please continue using map syntax\n custom_storage_class_map: {} # S3_CUSTOM_STORAGE_CLASS_MAP, allow setup storage class depending on the backup name regexp pattern, format nameRegexp > className\n request_payer: \"\" # S3_REQUEST_PAYER, define who will pay to request, look https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html for details, possible values requester, if empty then bucket owner\n debug: false # S3_DEBUG\nEOF\n\texport CLICKHOUSE_BACKUP_CONFIG=\"$clickhouse_backup_config\"\n}\n\nfunction getToolConfigValue() {\n\tlocal var=$1\n\tcat \"$toolConfig\" | grep \"$var\" | awk '{print $NF}'\n}\n\nfunction set_clickhouse_backup_config_env() {\n\ttoolConfig=/etc/datasafed/datasafed.conf\n\tif [ ! -f ${toolConfig} ]; then\n\t\tDP_error_log \"Config file not found: ${toolConfig}\"\n\t\texit 1\n\tfi\n\n\tlocal provider=\"\"\n\tlocal access_key_id=\"\"\n\tlocal secret_access_key=\"\"\n\tlocal region=\"\"\n\tlocal endpoint=\"\"\n\tlocal bucket=\"\"\n\n\tIFS=$'\\n'\n\tfor line in $(cat ${toolConfig}); do\n\t\tline=$(eval echo $line)\n\t\tif [[ $line == \"access_key_id\"* ]]; then\n\t\t\taccess_key_id=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"secret_access_key\"* ]]; then\n\t\t\tsecret_access_key=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"region\"* ]]; then\n\t\t\tregion=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"endpoint\"* ]]; then\n\t\t\tendpoint=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"root\"* ]]; then\n\t\t\tbucket=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"chunk_size\"* ]]; then\n\t\t\tchunk_size=$(getToolConfigValue \"$line\")\n\t\telif [[ $line == \"provider\"* ]]; then\n\t\t\tprovider=$(getToolConfigValue \"$line\")\n\t\tfi\n\tdone\n\n\tif [[ ! 
$endpoint =~ ^https?:// ]]; then\n\t\tendpoint=\"https://${endpoint}\"\n\tfi\n\n\tif [[ \"$provider\" == \"Alibaba\" ]]; then\n\t\tregex='https?:\\/\\/oss-(.*?)\\.aliyuncs\\.com'\n\t\tif [[ \"$endpoint\" =~ $regex ]]; then\n\t\t\tregion=\"${BASH_REMATCH[1]}\"\n\t\t\tDP_log \"Extract region from $endpoint-> $region\"\n\t\telse\n\t\t\tDP_log \"Failed to extract region from endpoint: $endpoint\"\n\t\tfi\n\telif [[ \"$provider\" == \"TencentCOS\" ]]; then\n\t\tregex='https?:\\/\\/cos\\.(.*?)\\.myqcloud\\.com'\n\t\tif [[ \"$endpoint\" =~ $regex ]]; then\n\t\t\tregion=\"${BASH_REMATCH[1]}\"\n\t\t\tDP_log \"Extract region from $endpoint-> $region\"\n\t\telse\n\t\t\tDP_log \"Failed to extract region from endpoint: $endpoint\"\n\t\tfi\n\telif [[ \"$provider\" == \"Minio\" || \"$provider\" == \"RustFS\" ]]; then\n\t\texport S3_FORCE_PATH_STYLE=true\n\telse\n\t\techo \"Unsupported provider: $provider\"\n\tfi\n\n\texport S3_ACCESS_KEY=\"${access_key_id}\"\n\texport S3_SECRET_KEY=\"${secret_access_key}\"\n\texport S3_REGION=\"${region}\"\n\texport S3_ENDPOINT=\"${endpoint}\"\n\texport S3_BUCKET=\"${bucket}\"\n\texport S3_PART_SIZE=\"${chunk_size}\"\n\texport S3_PATH=\"${DP_BACKUP_BASE_PATH}\"\n\texport INIT_CLUSTER_NAME=\"${INIT_CLUSTER_NAME:-default}\"\n\texport RESTORE_SCHEMA_ON_CLUSTER=\"${INIT_CLUSTER_NAME}\"\n\texport CLICKHOUSE_HOST=\"${DP_DB_HOST}\"\n\texport CLICKHOUSE_USERNAME=\"${CLICKHOUSE_ADMIN_USER}\"\n\texport CLICKHOUSE_PASSWORD=\"${CLICKHOUSE_ADMIN_PASSWORD}\"\n\tif [[ \"${TLS_ENABLED:-false}\" == \"true\" ]]; then\n\t\texport CLICKHOUSE_SECURE=true\n\t\texport CLICKHOUSE_PORT=\"${CLICKHOUSE_TCP_SECURE_PORT:-9440}\"\n\t\texport CLICKHOUSE_TLS_CA=\"/etc/pki/tls/ca.pem\"\n\t\texport CLICKHOUSE_TLS_CERT=\"/etc/pki/tls/cert.pem\"\n\t\texport CLICKHOUSE_TLS_KEY=\"/etc/pki/tls/key.pem\"\n\t\texport CLICKHOUSE_SKIP_VERIFY=true\n\tfi\n\tDP_log \"Dynamic environment variables for clickhouse-backup have been set.\"\n}\n\nfunction ch_query() {\n\tlocal query=\"$1\"\n\tlocal ch_port=\"${CLICKHOUSE_PORT:-9000}\"\n\tlocal ch_args=(--user \"${CLICKHOUSE_USERNAME}\" --password \"${CLICKHOUSE_PASSWORD}\" --host \"${CLICKHOUSE_HOST}\" --port \"$ch_port\" --connect_timeout=5)\n\tclickhouse-client \"${ch_args[@]}\" --query \"$query\"\n}\n\nfunction download_backup() {\n\tlocal backup_name=\"$1\"\n\tclickhouse-backup download \"$backup_name\" || {\n\t\tDP_error_log \"Failed to download backup '$backup_name'\"\n\t\treturn 1\n\t}\n\tDP_log \"Downloading backup '$backup_name' from remote storage...\"\n\treturn 0\n}\n\nfunction fetch_backup() {\n\tlocal backup_name=$1\n\tif clickhouse-backup list local | grep -q \"$backup_name\"; then\n\t\tDP_log \"Local backup '$backup_name' found.\"\n\telse\n\t\tDP_log \"Local backup '$backup_name' not found. Downloading...\"\n\t\tdownload_backup \"$backup_name\" || {\n\t\t\tDP_error_log \"Failed to download backup '$backup_name'. Exiting.\"\n\t\t\texit 1\n\t\t}\n\t\tclickhouse-backup list local | grep -q \"$backup_name\" || {\n\t\t\tDP_error_log \"Backup '$backup_name' not found after download. 
Exiting.\"\n\t\t\texit 1\n\t\t}\n\tfi\n\tDP_log \"Backup '$backup_name' is available locally.\"\n}\n\nfunction delete_backups_except() {\n\tlocal latest_backup=$1\n\tDP_log \"delete backup except $latest_backup\"\n\tbackup_list=$(clickhouse-backup list)\n\techo \"$backup_list\" | awk '/local/ {print $1}' | while IFS= read -r backup_name; do\n\t\tif [ \"$backup_name\" != \"$latest_backup\" ]; then\n\t\t\tclickhouse-backup delete local \"$backup_name\" || {\n\t\t\t\tDP_error_log \"Clickhouse-backup delete local backup $backup_name FAILED\"\n\t\t\t}\n\t\tfi\n\tdone\n}\n\n# Save backup size info for DP status reporting\nfunction save_backup_size() {\n\tlocal shard_base_dir\n\tshard_base_dir=$(dirname \"${DP_BACKUP_BASE_PATH}\")\n\texport DATASAFED_BACKEND_BASE_PATH=\"$shard_base_dir\"\n\texport PATH=\"$PATH:$DP_DATASAFED_BIN_PATH\"\n\tlocal backup_size\n\tbackup_size=$(datasafed stat / | grep TotalSize | awk '{print $2}')\n\tDP_save_backup_status_info \"$backup_size\"\n}\n\n# Restore schema and wait for sync across shards\nfunction restore_schema_and_sync() {\n\tlocal backup_name=\"$1\"\n\tlocal mode_info=\"$2\"\n\tlocal schema_db=\"kubeblocks\"\n\tlocal schema_table=\"__restore_ready__\"\n\tlocal timeout=\"${RESTORE_SCHEMA_READY_TIMEOUT_SECONDS:-1800}\"\n\tlocal interval=\"${RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS:-5}\"\n\n\t# Determine if this pod should execute schema restore\n\tlocal should_restore_schema=false\n\tif [[ \"$mode_info\" == \"standalone\" ]]; then\n\t\tshould_restore_schema=true\n\telse\n\t\tlocal first_component=\"${mode_info#cluster:}\"\n\t\t[[ \"${CURRENT_SHARD_COMPONENT_SHORT_NAME}\" == \"$first_component\" ]] && should_restore_schema=true\n\tfi\n\n\tif [[ \"$should_restore_schema\" == \"true\" ]]; then\n\t\t# Standalone: unset ON CLUSTER mode to avoid ZK requirement\n\t\t[[ \"$mode_info\" == \"standalone\" ]] && unset RESTORE_SCHEMA_ON_CLUSTER\n\t\tclickhouse-backup restore_remote \"$backup_name\" --schema --rbac || {\n\t\t\tDP_error_log \"Clickhouse-backup restore_remote backup $backup_name FAILED\"\n\t\t\treturn 1\n\t\t}\n\t\t# Cluster mode: create marker table for cross-shard coordination\n\t\tif [[ \"$mode_info\" != \"standalone\" ]]; then\n\t\t\tch_query \"CREATE DATABASE IF NOT EXISTS \\`${schema_db}\\` ON CLUSTER \\`${INIT_CLUSTER_NAME}\\`\" || {\n\t\t\t\tDP_error_log \"Failed to create database ${schema_db}\"\n\t\t\t\treturn 1\n\t\t\t}\n\t\t\tch_query \"CREATE TABLE IF NOT EXISTS \\`${schema_db}\\`.\\`${schema_table}\\` ON CLUSTER \\`${INIT_CLUSTER_NAME}\\` (shard String, finished_at DateTime, backup_name String) ENGINE=TinyLog\" || {\n\t\t\t\tDP_error_log \"Failed to create schema ready marker\"\n\t\t\t\treturn 1\n\t\t\t}\n\t\tfi\n\telse\n\t\tDP_log \"Waiting for schema ready table on ${CLICKHOUSE_HOST}...\"\n\t\tlocal start=$(date +%s)\n\t\twhile true; do\n\t\t\tif [[ \"$(ch_query \"EXISTS TABLE \\`${schema_db}\\`.\\`${schema_table}\\`\")\" == \"1\" ]]; then\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\tlocal now=$(date +%s)\n\t\t\tif [[ $((now - start)) -ge $timeout ]]; then\n\t\t\t\tDP_error_log \"Timeout waiting for schema ready table on ${CLICKHOUSE_HOST}\"\n\t\t\t\treturn 1\n\t\t\tfi\n\t\t\tsleep \"$interval\"\n\t\tdone\n\tfi\n}\n\n# Full restore: schema + data + marker\nfunction do_restore() {\n\tlocal backup_name=\"$1\"\n\tlocal mode_info=\"$2\"\n\tlocal schema_db=\"kubeblocks\"\n\tlocal schema_table=\"__restore_ready__\"\n\n\t# Restore schema (first shard in cluster uses ON CLUSTER DDL)\n\trestore_schema_and_sync \"$backup_name\" \"$mode_info\" || return 1\n\n\t# 
Restore data\n\tclickhouse-backup restore_remote \"$backup_name\" --data || {\n\t\tDP_error_log \"Clickhouse-backup restore_remote --data FAILED\"\n\t\treturn 1\n\t}\n\n\t# Insert shard ready marker (cluster mode only)\n\tif [[ \"$mode_info\" != \"standalone\" ]]; then\n\t\tch_query \"INSERT INTO \\`${schema_db}\\`.\\`${schema_table}\\` (shard, finished_at, backup_name) VALUES ('${CURRENT_SHARD_COMPONENT_SHORT_NAME}', now(), '$backup_name')\" || {\n\t\t\tDP_error_log \"Failed to insert shard ready marker\"\n\t\t\treturn 1\n\t\t}\n\tfi\n}\n\n#!/bin/bash\nset -exo pipefail\n\n# Supports: standalone (single node) and cluster (multi-shard) topologies\n# Strategy: first shard restores schema with ON CLUSTER, others wait for sync\n\ntrap handle_exit EXIT\ngenerate_backup_config\nset_clickhouse_backup_config_env\n\nif [[ \"${CLICKHOUSE_SECURE}\" = \"true\" ]]; then\n\tDP_error_log \"ClickHouse restore does not support TLS\"\n\texit 1\nfi\n\n# 1. Detect topology mode: standalone (no ':' in FQDN) or cluster\nfirst_entry=\"${ALL_COMBINED_SHARDS_POD_FQDN_LIST%%,*}\"\nfirst_component=\"${first_entry%%:*}\"\nif [[ -z \"$first_component\" ]]; then\n\tDP_error_log \"Invalid ALL_COMBINED_SHARDS_POD_FQDN_LIST\"\n\texit 1\nfi\nif [[ \"$first_component\" == \"$first_entry\" ]]; then\n\tmode_info=\"standalone\"\n\tDP_log \"Standalone mode detected\"\nelse\n\tmode_info=\"cluster:$first_component\"\nfi\n\n# 2. Restore schema + data + marker\ndo_restore \"${DP_BACKUP_NAME}\" \"$mode_info\" || exit 1\n\n# 3. Cleanup local backups\ndelete_backups_except \"\"\n" env: - name: DP_BACKUP_NAME value: backup-ns-uwpgk-clkhouse-icopne-20260211191142 - name: DP_TARGET_RELATIVE_PATH value: clickhouse-7gx - name: DP_BACKUP_ROOT_PATH value: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse - name: DP_BACKUP_BASE_PATH value: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx - name: DP_BACKUP_STOP_TIME value: "2026-02-11T11:12:05Z" - name: RESTORE_SCHEMA_READY_TIMEOUT_SECONDS value: "1800" - name: RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS value: "5" - name: CLICKHOUSE_ADMIN_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: CURRENT_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: INIT_CLUSTER_NAME value: default - name: DP_DB_USER valueFrom: secretKeyRef: key: username name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: DP_DB_PASSWORD valueFrom: secretKeyRef: key: password name: clkhouse-icopne-backup-clickhouse-ngn-account-admin - name: DP_DB_PORT value: "8001" - name: DP_DB_HOST value: clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed envFrom: - configMapRef: name: clkhouse-icopne-backup-clickhouse-ngn-env optional: false image: docker.io/apecloud/clickhouse-backup-full:2.6.42 imagePullPolicy: IfNotPresent name: restore resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /etc/clickhouse-client name: client-config - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: 
kube-api-access-xwzps readOnly: true - args: - |2 set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" command: - sh - -c env: - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: restore-manager resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xwzps readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: kbcli-test-registry-key initContainers: - command: - /bin/sh - -c - /scripts/install-datasafed.sh /bin/datasafed image: docker.io/apecloud/datasafed:0.2.3 imagePullPolicy: IfNotPresent name: dp-copy-datasafed resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xwzps readOnly: true nodeName: aks-cicdamdpool-55976491-vmss000001 nodeSelector: kubernetes.io/hostname: aks-cicdamdpool-55976491-vmss000001 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: runAsUser: 0 serviceAccount: kubeblocks-dataprotection-worker serviceAccountName: kubeblocks-dataprotection-worker terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: data persistentVolumeClaim: claimName: data-clkhouse-icopne-backup-clickhouse-ngn-0 - configMap: defaultMode: 292 name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-client-tpl name: client-config - configMap: defaultMode: 292 name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-tpl name: config - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] path: stop_restore_manager name: downward-volume-sidecard - name: dp-datasafed-config secret: defaultMode: 420 secretName: tool-config-backuprepo-kbcli-test-88dtkr - emptyDir: {} name: dp-datasafed-bin - name: kube-api-access-xwzps projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:30Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:24Z" status: "True" type: Initialized - 
lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:23Z" reason: PodFailed status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:23Z" reason: PodFailed status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-11T11:15:23Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://ab18e7551cab12af38b78e7b84a7b4e7155801885ea5c2a93b369a72f1a316ab image: docker.io/apecloud/clickhouse-backup-full:2.6.42 imageID: docker.io/apecloud/clickhouse-backup-full@sha256:0dedf050bf78f889c2d6ed7120aae4df927c7816a72863ac017aba49c072af4e lastState: {} name: restore ready: false restartCount: 0 started: false state: terminated: containerID: containerd://ab18e7551cab12af38b78e7b84a7b4e7155801885ea5c2a93b369a72f1a316ab exitCode: 1 finishedAt: "2026-02-11T11:15:25Z" reason: Error startedAt: "2026-02-11T11:15:25Z" volumeMounts: - mountPath: /bitnami/clickhouse name: data - mountPath: /etc/clickhouse-client name: client-config - mountPath: /opt/bitnami/clickhouse/etc/conf.d name: config - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xwzps readOnly: true recursiveReadOnly: Disabled - containerID: containerd://c2e5dffe215742912be09e59db1af965ed07040be91b4e1b98c0b1094c5eee48 image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: restore-manager ready: false restartCount: 0 started: false state: terminated: containerID: containerd://c2e5dffe215742912be09e59db1af965ed07040be91b4e1b98c0b1094c5eee48 exitCode: 0 finishedAt: "2026-02-11T11:15:28Z" reason: Completed startedAt: "2026-02-11T11:15:25Z" volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xwzps readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.7 hostIPs: - ip: 10.224.0.7 initContainerStatuses: - containerID: containerd://c78f77a980f4096032568f89ce210705a5ad4279029b33c55c3d013274242398 image: docker.io/apecloud/datasafed:0.2.3 imageID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f lastState: {} name: dp-copy-datasafed ready: true restartCount: 0 started: false state: terminated: containerID: containerd://c78f77a980f4096032568f89ce210705a5ad4279029b33c55c3d013274242398 exitCode: 0 finishedAt: "2026-02-11T11:15:23Z" reason: Completed startedAt: "2026-02-11T11:15:23Z" volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-xwzps readOnly: true recursiveReadOnly: Disabled phase: Failed podIP: 10.244.4.223 podIPs: - ip: 10.244.4.223 qosClass: BestEffort startTime: "2026-02-11T11:15:23Z" ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8--------------------------------------  `kubectl describe pod 
restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn --namespace ns-uwpgk `(B  Name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn Namespace: ns-uwpgk Priority: 0 Service Account: kubeblocks-dataprotection-worker Node: aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Start Time: Wed, 11 Feb 2026 19:13:51 +0800 Labels: app.kubernetes.io/managed-by=kubeblocks-dataprotection batch.kubernetes.io/controller-uid=b0539470-d058-40af-94c5-e989ce4bf597 batch.kubernetes.io/job-name=restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 controller-uid=b0539470-d058-40af-94c5-e989ce4bf597 dataprotection.kubeblocks.io/restore=clkhouse-icopne-backup-clickhouse-ngn-5362cf29-postready job-name=restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 Annotations: dataprotection.kubeblocks.io/stop-restore-manager: true Status: Failed IP: 10.244.4.222 IPs: IP: 10.244.4.222 Controlled By: Job/restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 Init Containers: dp-copy-datasafed: Container ID: containerd://a8bc69d15939d90a4e4fce91811eb07094da4d46e000e7aad8b524716ddc67dd Image: docker.io/apecloud/datasafed:0.2.3 Image ID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f Port: Host Port: Command: /bin/sh -c /scripts/install-datasafed.sh /bin/datasafed State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 11 Feb 2026 19:13:51 +0800 Finished: Wed, 11 Feb 2026 19:13:51 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: Mounts: /bin/datasafed from dp-datasafed-bin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42mtd (ro) Containers: restore: Container ID: containerd://11d5f9dafb7f3dd5542af53b79c4d467cbab2d1425f3b12bd864d7b8821b3a29 Image: docker.io/apecloud/clickhouse-backup-full:2.6.42 Image ID: docker.io/apecloud/clickhouse-backup-full@sha256:0dedf050bf78f889c2d6ed7120aae4df927c7816a72863ac017aba49c072af4e Port: Host Port: Command: bash -c #!/bin/bash # log info file function DP_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} INFO: $msg" } # log error info function DP_error_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} ERROR: $msg" } # Get file names without extensions based on the incoming file path function DP_get_file_name_without_ext() { local fileName=$1 local file_without_ext=${fileName%.*} echo $(basename ${file_without_ext}) } # Save backup status info file for syncing progress. # timeFormat: %Y-%m-%dT%H:%M:%SZ function DP_save_backup_status_info() { local totalSize=$1 local startTime=$2 local stopTime=$3 local timeZone=$4 local extras=$5 local timeZoneStr="" if [ ! -z ${timeZone} ]; then timeZoneStr=",\"timeZone\":\"${timeZone}\"" fi if [ -z "${stopTime}" ]; then echo "{\"totalSize\":\"${totalSize}\"}" >${DP_BACKUP_INFO_FILE} elif [ -z "${startTime}" ]; then echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"end\":\"${stopTime}\"${timeZoneStr}}}" >${DP_BACKUP_INFO_FILE} else echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"start\":\"${startTime}\",\"end\":\"${stopTime}\"${timeZoneStr}}}" >${DP_BACKUP_INFO_FILE} fi } # Clean up expired logfiles. 
# Default interval is 60s # Default rootPath is / function DP_purge_expired_files() { local currentUnix="${1:?missing current unix}" local last_purge_time="${2:?missing last_purge_time}" local root_path=${3:-"/"} local interval_seconds=${4:-60} local diff_time=$((${currentUnix} - ${last_purge_time})) if [[ -z ${DP_TTL_SECONDS} || ${diff_time} -lt ${interval_seconds} ]]; then return fi expiredUnix=$((${currentUnix} - ${DP_TTL_SECONDS})) files=$(datasafed list -f --recursive --older-than ${expiredUnix} ${root_path}) for file in "${files[@]}"; do datasafed rm "$file" echo "$file" done } # analyze the start time of the earliest file from the datasafed backend. # Then record the file name into dp_oldest_file.info. # If the oldest file is no changed, exit the process. # This can save traffic consumption. function DP_analyze_start_time_from_datasafed() { local oldest_file="${1:?missing oldest file}" local get_start_time_from_file="${2:?missing get_start_time_from_file function}" local datasafed_pull="${3:?missing datasafed_pull function}" local info_file="${KB_BACKUP_WORKDIR}/dp_oldest_file.info" mkdir -p ${KB_BACKUP_WORKDIR} && cd ${KB_BACKUP_WORKDIR} if [ -f ${info_file} ]; then last_oldest_file=$(cat ${info_file}) last_oldest_file_name=$(DP_get_file_name_without_ext ${last_oldest_file}) if [ "$last_oldest_file" == "${oldest_file}" ]; then # oldest file no changed. ${get_start_time_from_file} $last_oldest_file_name return fi # remove last oldest file if [ -f ${last_oldest_file_name} ]; then rm -rf ${last_oldest_file_name} fi fi # pull file ${datasafed_pull} ${oldest_file} # record last oldest file echo ${oldest_file} >${info_file} oldest_file_name=$(DP_get_file_name_without_ext ${oldest_file}) ${get_start_time_from_file} ${oldest_file_name} } # get the timeZone offset for location, such as Asia/Shanghai function getTimeZoneOffset() { local timeZone=${1:?missing time zone} if [[ $timeZone == "+"* ]] || [[ $timeZone == "-"* ]]; then echo ${timeZone} return fi local currTime=$(TZ=UTC date) local utcHour=$(TZ=UTC date -d "${currTime}" +"%H") local zoneHour=$(TZ=${timeZone} date -d "${currTime}" +"%H") local offset=$((${zoneHour} - ${utcHour})) if [ $offset -eq 0 ]; then return fi symbol="+" if [ $offset -lt 0 ]; then symbol="-" && offset=${offset:1} fi if [ $offset -lt 10 ]; then offset="0${offset}" fi echo "${symbol}${offset}:00" } # if the script exits with a non-zero exit code, touch a file to indicate that the backup failed, # the sync progress container will check this file and exit if it exists function handle_exit() { exit_code=$? if [ "$exit_code" -ne 0 ]; then DP_error_log "Backup failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function generate_backup_config() { clickhouse_backup_config=$(mktemp) || { DP_error_log "Failed to create temporary file" return 1 } # whole config see https://github.com/Altinity/clickhouse-backup cat >"$clickhouse_backup_config" <<'EOF' general: remote_storage: s3 # REMOTE_STORAGE, choice from: `azblob`,`gcs`,`s3`, etc; if `none` then `upload` and `download` commands will fail. 
max_file_size: 1125899906842624 # MAX_FILE_SIZE, 1PB by default, useless when upload_by_part is true, use to split data parts files by archives backups_to_keep_local: 0 # BACKUPS_TO_KEEP_LOCAL, how many latest local backup should be kept, 0 means all created backups will be stored on local disk, -1 means backup will keep after `create` but will delete after `create_remote` command backups_to_keep_remote: 0 # BACKUPS_TO_KEEP_REMOTE, how many latest backup should be kept on remote storage, 0 means all uploaded backups will be stored on remote storage. log_level: info # LOG_LEVEL, a choice from `debug`, `info`, `warning`, `error` allow_empty_backups: true # ALLOW_EMPTY_BACKUPS download_concurrency: 1 # DOWNLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2)) upload_concurrency: 1 # UPLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2)) download_max_bytes_per_second: 0 # DOWNLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling upload_max_bytes_per_second: 0 # UPLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling object_disk_server_side_copy_concurrency: 32 allow_object_disk_streaming: false # restore schema on cluster is alway run by `INIT_CLUSTER_NAME` cluster of clickhouse, when schema restore, the ddl only runs on first pod of first shard restore_schema_on_cluster: "" # RESTORE_SCHEMA_ON_CLUSTER, execute all schema related SQL queries with `ON CLUSTER` clause as Distributed DDL. This isn't applicable when `use_embedded_backup_restore: true` upload_by_part: true # UPLOAD_BY_PART download_by_part: true # DOWNLOAD_BY_PART use_resumable_state: true # USE_RESUMABLE_STATE, allow resume upload and download according to the .resumable file. Resumable state is not supported for custom method in remote storage. restore_database_mapping: {} # RESTORE_DATABASE_MAPPING, like "src_db1:target_db1,src_db2:target_db2", restore rules from backup databases to target databases, which is useful when changing destination database, all atomic tables will be created with new UUIDs. restore_table_mapping: {} # RESTORE_TABLE_MAPPING, like "src_table1:target_table1,src_table2:target_table2" restore rules from backup tables to target tables, which is useful when changing destination tables. retries_on_failure: 3 # RETRIES_ON_FAILURE, how many times to retry after a failure during upload or download retries_pause: 5s # RETRIES_PAUSE, duration time to pause after each download or upload failure watch_interval: 1h # WATCH_INTERVAL, use only for `watch` command, backup will create every 1h full_interval: 24h # FULL_INTERVAL, use only for `watch` command, full backup will create every 24h watch_backup_name_template: "shard{shard}-{type}-{time:20060102150405}" # WATCH_BACKUP_NAME_TEMPLATE, used only for `watch` command, macros values will apply from `system.macros` for time:XXX, look format in https://go.dev/src/time/format.go sharded_operation_mode: none # SHARDED_OPERATION_MODE, how different replicas will shard backing up data for tables. Options are: none (no sharding), table (table granularity), database (database granularity), first-replica (on the lexicographically sorted first active replica). If left empty, then the "none" option will be set as default. 
cpu_nice_priority: 15 # CPU niceness priority, to allow throttling CPU intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/nice.1.html io_nice_priority: "idle" # IO niceness priority, to allow throttling DISK intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/ionice.1.html rbac_backup_always: true # always, backup RBAC objects rbac_resolve_conflicts: "recreate" # action, when RBAC object with the same name already exists, allow "recreate", "ignore", "fail" values clickhouse: username: default # CLICKHOUSE_USERNAME password: "" # CLICKHOUSE_PASSWORD host: localhost # CLICKHOUSE_HOST, To make backup data `clickhouse-backup` requires access to the same file system as clickhouse-server, so `host` should localhost or address of another docker container on the same machine, or IP address bound to some network interface on the same host. port: 9000 # CLICKHOUSE_PORT, don't use 8123, clickhouse-backup doesn't support HTTP protocol disk_mapping: {} # CLICKHOUSE_DISK_MAPPING, use this mapping when your `system.disks` are different between the source and destination clusters during backup and restore process. The format for this env variable is "disk_name1:disk_path1,disk_name2:disk_path2". For YAML please continue using map syntax. If destination disk is different from source backup disk then you need to specify the destination disk in the config file: disk_mapping: disk_destination: /var/lib/clickhouse/disks/destination `disk_destination` needs to be referenced in backup (source config), and all names from this map (`disk:path`) shall exist in `system.disks` on destination server. During download of the backup from remote location (s3), if `name` is not present in `disk_mapping` (on the destination server config too) then `default` disk path will used for download. `disk_mapping` is used to understand during download where downloaded parts shall be unpacked (which disk) on destination server and where to search for data parts directories during restore. skip_tables: # CLICKHOUSE_SKIP_TABLES, the list of tables (pattern are allowed) which are ignored during backup and restore process The format for this env variable is "pattern1,pattern2,pattern3". For YAML please continue using list syntax - system.* - INFORMATION_SCHEMA.* - information_schema.* skip_table_engines: [] # CLICKHOUSE_SKIP_TABLE_ENGINES, the list of tables engines which are ignored during backup, upload, download, restore process The format for this env variable is "Engine1,Engine2,engine3". For YAML please continue using list syntax skip_disks: [] # CLICKHOUSE_SKIP_DISKS, list of disk names which are ignored during create, upload, download and restore command The format for this env variable is "Engine1,Engine2,engine3". For YAML please continue using list syntax skip_disk_types: [] # CLICKHOUSE_SKIP_DISK_TYPES, list of disk types which are ignored during create, upload, download and restore command The format for this env variable is "Engine1,Engine2,engine3". 
For YAML please continue using list syntax timeout: 5m # CLICKHOUSE_TIMEOUT freeze_by_part: false # CLICKHOUSE_FREEZE_BY_PART, allow freezing by part instead of freezing the whole table freeze_by_part_where: "" # CLICKHOUSE_FREEZE_BY_PART_WHERE, allow parts filtering during freezing when freeze_by_part: true secure: false # CLICKHOUSE_SECURE, use TLS encryption for connection skip_verify: false # CLICKHOUSE_SKIP_VERIFY, skip certificate verification and allow potential certificate warnings sync_replicated_tables: true # CLICKHOUSE_SYNC_REPLICATED_TABLES tls_key: "" # CLICKHOUSE_TLS_KEY, filename with TLS key file tls_cert: "" # CLICKHOUSE_TLS_CERT, filename with TLS certificate file tls_ca: "" # CLICKHOUSE_TLS_CA, filename with TLS custom authority file log_sql_queries: true # CLICKHOUSE_LOG_SQL_QUERIES, logging `clickhouse-backup` SQL queries on `info` level, when true, `debug` level when false debug: false # CLICKHOUSE_DEBUG config_dir: "/opt/bitnami/clickhouse/etc" # CLICKHOUSE_CONFIG_DIR restart_command: "sql:SYSTEM SHUTDOWN" # CLICKHOUSE_RESTART_COMMAND, use this command when restoring with --rbac, --rbac-only or --configs, --configs-only options will split command by ; and execute one by one, all errors will logged and ignore available prefixes - sql: will execute SQL query - exec: will execute command via shell ignore_not_exists_error_during_freeze: true # CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE, helps to avoid backup failures when running frequent CREATE / DROP tables and databases during backup, `clickhouse-backup` will ignore `code: 60` and `code: 81` errors during execution of `ALTER TABLE ... FREEZE` check_replicas_before_attach: true # CLICKHOUSE_CHECK_REPLICAS_BEFORE_ATTACH, helps avoiding concurrent ATTACH PART execution when restoring ReplicatedMergeTree tables default_replica_path: "/clickhouse/tables/{layer}/{shard}/{database}/{table}" # CLICKHOUSE_DEFAULT_REPLICA_PATH, will use during restore Replicated tables without macros in replication_path if replica already exists, to avoid restoring conflicts default_replica_name: "{replica}" # CLICKHOUSE_DEFAULT_REPLICA_NAME, will use during restore Replicated tables without macros in replica_name if replica already exists, to avoid restoring conflicts use_embedded_backup_restore: false # CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE, use BACKUP / RESTORE SQL statements instead of regular SQL queries to use features of modern ClickHouse server versions embedded_backup_disk: "" # CLICKHOUSE_EMBEDDED_BACKUP_DISK - disk from system.disks which will use when `use_embedded_backup_restore: true` backup_mutations: true # CLICKHOUSE_BACKUP_MUTATIONS, allow backup mutations from system.mutations WHERE is_done=0 and apply it during restore restore_as_attach: false # CLICKHOUSE_RESTORE_AS_ATTACH, allow restore tables which have inconsistent data parts structure and mutations in progress check_parts_columns: true # CLICKHOUSE_CHECK_PARTS_COLUMNS, check data types from system.parts_columns during create backup to guarantee mutation is complete max_connections: 0 # CLICKHOUSE_MAX_CONNECTIONS, how many parallel connections could be opened during operations s3: access_key: "" # S3_ACCESS_KEY secret_key: "" # S3_SECRET_KEY bucket: "" # S3_BUCKET endpoint: "" # S3_ENDPOINT region: us-east-1 # S3_REGION acl: private # S3_ACL, AWS changed S3 defaults in April 2023 so that all new buckets have ACL disabled: https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/ They also recommend that ACLs are disabled: 
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ensure-object-ownership.html use `acl: ""` if you see "api error AccessControlListNotSupported: The bucket does not allow ACLs" assume_role_arn: "" # S3_ASSUME_ROLE_ARN force_path_style: false # S3_FORCE_PATH_STYLE path: "" # S3_PATH, `system.macros` values can be applied as {macro_name} object_disk_path: "" # S3_OBJECT_DISK_PATH, path for backup of part from clickhouse object disks, if object disks present in clickhouse, then shall not be zero and shall not be prefixed by `path` disable_ssl: false # S3_DISABLE_SSL compression_level: 1 # S3_COMPRESSION_LEVEL compression_format: tar # S3_COMPRESSION_FORMAT, allowed values tar, lz4, bzip2, gzip, sz, xz, brortli, zstd, `none` for upload data part folders as is look at details in https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html sse: "" # S3_SSE, empty (default), AES256, or aws:kms sse_customer_algorithm: "" # S3_SSE_CUSTOMER_ALGORITHM, encryption algorithm, for example, AES256 sse_customer_key: "" # S3_SSE_CUSTOMER_KEY, customer-provided encryption key use `openssl rand 32 > aws_sse.key` and `cat aws_sse.key | base64` sse_customer_key_md5: "" # S3_SSE_CUSTOMER_KEY_MD5, 128-bit MD5 digest of the encryption key according to RFC 1321 use `cat aws_sse.key | openssl dgst -md5 -binary | base64` sse_kms_key_id: "" # S3_SSE_KMS_KEY_ID, if S3_SSE is aws:kms then specifies the ID of the Amazon Web Services Key Management Service sse_kms_encryption_context: "" # S3_SSE_KMS_ENCRYPTION_CONTEXT, base64-encoded UTF-8 string holding a JSON with the encryption context Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. This is a collection of non-secret key-value pairs that represent additional authenticated data. When you use an encryption context to encrypt data, you must specify the same (an exact case-sensitive match) encryption context to decrypt the data. An encryption context is supported only on operations with symmetric encryption KMS keys disable_cert_verification: false # S3_DISABLE_CERT_VERIFICATION use_custom_storage_class: false # S3_USE_CUSTOM_STORAGE_CLASS storage_class: STANDARD # S3_STORAGE_CLASS, by default allow only from list https://github.com/aws/aws-sdk-go-v2/blob/main/service/s3/types/enums.go#L787-L799 concurrency: 1 # S3_CONCURRENCY max_parts_count: 4000 # S3_MAX_PARTS_COUNT, number of parts for S3 multipart uploads allow_multipart_download: false # S3_ALLOW_MULTIPART_DOWNLOAD, allow faster multipart download speed, but will require additional disk space, download_concurrency * part size in worst case checksum_algorithm: "" # S3_CHECKSUM_ALGORITHM, use it when you use object lock which allow to avoid delete keys from bucket until some timeout after creation, use CRC32 as fastest object_labels: {} # S3_OBJECT_LABELS, allow setup metadata for each object during upload, use {macro_name} from system.macros and {backupName} for current backup name The format for this env variable is "key1:value1,key2:value2". 
For YAML please continue using map syntax custom_storage_class_map: {} # S3_CUSTOM_STORAGE_CLASS_MAP, allow setup storage class depending on the backup name regexp pattern, format nameRegexp > className request_payer: "" # S3_REQUEST_PAYER, define who will pay to request, look https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html for details, possible values requester, if empty then bucket owner debug: false # S3_DEBUG EOF export CLICKHOUSE_BACKUP_CONFIG="$clickhouse_backup_config" } function getToolConfigValue() { local var=$1 cat "$toolConfig" | grep "$var" | awk '{print $NF}' } function set_clickhouse_backup_config_env() { toolConfig=/etc/datasafed/datasafed.conf if [ ! -f ${toolConfig} ]; then DP_error_log "Config file not found: ${toolConfig}" exit 1 fi local provider="" local access_key_id="" local secret_access_key="" local region="" local endpoint="" local bucket="" IFS=$'\n' for line in $(cat ${toolConfig}); do line=$(eval echo $line) if [[ $line == "access_key_id"* ]]; then access_key_id=$(getToolConfigValue "$line") elif [[ $line == "secret_access_key"* ]]; then secret_access_key=$(getToolConfigValue "$line") elif [[ $line == "region"* ]]; then region=$(getToolConfigValue "$line") elif [[ $line == "endpoint"* ]]; then endpoint=$(getToolConfigValue "$line") elif [[ $line == "root"* ]]; then bucket=$(getToolConfigValue "$line") elif [[ $line == "chunk_size"* ]]; then chunk_size=$(getToolConfigValue "$line") elif [[ $line == "provider"* ]]; then provider=$(getToolConfigValue "$line") fi done if [[ ! $endpoint =~ ^https?:// ]]; then endpoint="https://${endpoint}" fi if [[ "$provider" == "Alibaba" ]]; then regex='https?:\/\/oss-(.*?)\.aliyuncs\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "TencentCOS" ]]; then regex='https?:\/\/cos\.(.*?)\.myqcloud\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "Minio" || "$provider" == "RustFS" ]]; then export S3_FORCE_PATH_STYLE=true else echo "Unsupported provider: $provider" fi export S3_ACCESS_KEY="${access_key_id}" export S3_SECRET_KEY="${secret_access_key}" export S3_REGION="${region}" export S3_ENDPOINT="${endpoint}" export S3_BUCKET="${bucket}" export S3_PART_SIZE="${chunk_size}" export S3_PATH="${DP_BACKUP_BASE_PATH}" export INIT_CLUSTER_NAME="${INIT_CLUSTER_NAME:-default}" export RESTORE_SCHEMA_ON_CLUSTER="${INIT_CLUSTER_NAME}" export CLICKHOUSE_HOST="${DP_DB_HOST}" export CLICKHOUSE_USERNAME="${CLICKHOUSE_ADMIN_USER}" export CLICKHOUSE_PASSWORD="${CLICKHOUSE_ADMIN_PASSWORD}" if [[ "${TLS_ENABLED:-false}" == "true" ]]; then export CLICKHOUSE_SECURE=true export CLICKHOUSE_PORT="${CLICKHOUSE_TCP_SECURE_PORT:-9440}" export CLICKHOUSE_TLS_CA="/etc/pki/tls/ca.pem" export CLICKHOUSE_TLS_CERT="/etc/pki/tls/cert.pem" export CLICKHOUSE_TLS_KEY="/etc/pki/tls/key.pem" export CLICKHOUSE_SKIP_VERIFY=true fi DP_log "Dynamic environment variables for clickhouse-backup have been set." 
} function ch_query() { local query="$1" local ch_port="${CLICKHOUSE_PORT:-9000}" local ch_args=(--user "${CLICKHOUSE_USERNAME}" --password "${CLICKHOUSE_PASSWORD}" --host "${CLICKHOUSE_HOST}" --port "$ch_port" --connect_timeout=5) clickhouse-client "${ch_args[@]}" --query "$query" } function download_backup() { local backup_name="$1" clickhouse-backup download "$backup_name" || { DP_error_log "Failed to download backup '$backup_name'" return 1 } DP_log "Downloading backup '$backup_name' from remote storage..." return 0 } function fetch_backup() { local backup_name=$1 if clickhouse-backup list local | grep -q "$backup_name"; then DP_log "Local backup '$backup_name' found." else DP_log "Local backup '$backup_name' not found. Downloading..." download_backup "$backup_name" || { DP_error_log "Failed to download backup '$backup_name'. Exiting." exit 1 } clickhouse-backup list local | grep -q "$backup_name" || { DP_error_log "Backup '$backup_name' not found after download. Exiting." exit 1 } fi DP_log "Backup '$backup_name' is available locally." } function delete_backups_except() { local latest_backup=$1 DP_log "delete backup except $latest_backup" backup_list=$(clickhouse-backup list) echo "$backup_list" | awk '/local/ {print $1}' | while IFS= read -r backup_name; do if [ "$backup_name" != "$latest_backup" ]; then clickhouse-backup delete local "$backup_name" || { DP_error_log "Clickhouse-backup delete local backup $backup_name FAILED" } fi done } # Save backup size info for DP status reporting function save_backup_size() { local shard_base_dir shard_base_dir=$(dirname "${DP_BACKUP_BASE_PATH}") export DATASAFED_BACKEND_BASE_PATH="$shard_base_dir" export PATH="$PATH:$DP_DATASAFED_BIN_PATH" local backup_size backup_size=$(datasafed stat / | grep TotalSize | awk '{print $2}') DP_save_backup_status_info "$backup_size" } # Restore schema and wait for sync across shards function restore_schema_and_sync() { local backup_name="$1" local mode_info="$2" local schema_db="kubeblocks" local schema_table="__restore_ready__" local timeout="${RESTORE_SCHEMA_READY_TIMEOUT_SECONDS:-1800}" local interval="${RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS:-5}" # Determine if this pod should execute schema restore local should_restore_schema=false if [[ "$mode_info" == "standalone" ]]; then should_restore_schema=true else local first_component="${mode_info#cluster:}" [[ "${CURRENT_SHARD_COMPONENT_SHORT_NAME}" == "$first_component" ]] && should_restore_schema=true fi if [[ "$should_restore_schema" == "true" ]]; then # Standalone: unset ON CLUSTER mode to avoid ZK requirement [[ "$mode_info" == "standalone" ]] && unset RESTORE_SCHEMA_ON_CLUSTER clickhouse-backup restore_remote "$backup_name" --schema --rbac || { DP_error_log "Clickhouse-backup restore_remote backup $backup_name FAILED" return 1 } # Cluster mode: create marker table for cross-shard coordination if [[ "$mode_info" != "standalone" ]]; then ch_query "CREATE DATABASE IF NOT EXISTS \`${schema_db}\` ON CLUSTER \`${INIT_CLUSTER_NAME}\`" || { DP_error_log "Failed to create database ${schema_db}" return 1 } ch_query "CREATE TABLE IF NOT EXISTS \`${schema_db}\`.\`${schema_table}\` ON CLUSTER \`${INIT_CLUSTER_NAME}\` (shard String, finished_at DateTime, backup_name String) ENGINE=TinyLog" || { DP_error_log "Failed to create schema ready marker" return 1 } fi else DP_log "Waiting for schema ready table on ${CLICKHOUSE_HOST}..." 
local start=$(date +%s) while true; do if [[ "$(ch_query "EXISTS TABLE \`${schema_db}\`.\`${schema_table}\`")" == "1" ]]; then break fi local now=$(date +%s) if [[ $((now - start)) -ge $timeout ]]; then DP_error_log "Timeout waiting for schema ready table on ${CLICKHOUSE_HOST}" return 1 fi sleep "$interval" done fi } # Full restore: schema + data + marker function do_restore() { local backup_name="$1" local mode_info="$2" local schema_db="kubeblocks" local schema_table="__restore_ready__" # Restore schema (first shard in cluster uses ON CLUSTER DDL) restore_schema_and_sync "$backup_name" "$mode_info" || return 1 # Restore data clickhouse-backup restore_remote "$backup_name" --data || { DP_error_log "Clickhouse-backup restore_remote --data FAILED" return 1 } # Insert shard ready marker (cluster mode only) if [[ "$mode_info" != "standalone" ]]; then ch_query "INSERT INTO \`${schema_db}\`.\`${schema_table}\` (shard, finished_at, backup_name) VALUES ('${CURRENT_SHARD_COMPONENT_SHORT_NAME}', now(), '$backup_name')" || { DP_error_log "Failed to insert shard ready marker" return 1 } fi } #!/bin/bash set -exo pipefail # Supports: standalone (single node) and cluster (multi-shard) topologies # Strategy: first shard restores schema with ON CLUSTER, others wait for sync trap handle_exit EXIT generate_backup_config set_clickhouse_backup_config_env if [[ "${CLICKHOUSE_SECURE}" = "true" ]]; then DP_error_log "ClickHouse restore does not support TLS" exit 1 fi # 1. Detect topology mode: standalone (no ':' in FQDN) or cluster first_entry="${ALL_COMBINED_SHARDS_POD_FQDN_LIST%%,*}" first_component="${first_entry%%:*}" if [[ -z "$first_component" ]]; then DP_error_log "Invalid ALL_COMBINED_SHARDS_POD_FQDN_LIST" exit 1 fi if [[ "$first_component" == "$first_entry" ]]; then mode_info="standalone" DP_log "Standalone mode detected" else mode_info="cluster:$first_component" fi # 2. Restore schema + data + marker do_restore "${DP_BACKUP_NAME}" "$mode_info" || exit 1 # 3. 
Cleanup local backups delete_backups_except "" State: Terminated Reason: Error Exit Code: 1 Started: Wed, 11 Feb 2026 19:14:04 +0800 Finished: Wed, 11 Feb 2026 19:15:11 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-icopne-backup-clickhouse-ngn-env ConfigMap Optional: false Environment: DP_BACKUP_NAME: backup-ns-uwpgk-clkhouse-icopne-20260211191142 DP_TARGET_RELATIVE_PATH: clickhouse-7gx DP_BACKUP_ROOT_PATH: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse DP_BACKUP_BASE_PATH: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx DP_BACKUP_STOP_TIME: 2026-02-11T11:12:05Z RESTORE_SCHEMA_READY_TIMEOUT_SECONDS: 1800 RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS: 5 CLICKHOUSE_ADMIN_PASSWORD: Optional: false CURRENT_POD_NAME: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn (v1:metadata.name) INIT_CLUSTER_NAME: default DP_DB_USER: Optional: false DP_DB_PASSWORD: Optional: false DP_DB_PORT: 8001 DP_DB_HOST: clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /etc/datasafed from dp-datasafed-config (ro) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42mtd (ro) restore-manager: Container ID: containerd://55c830348418ba810971214efad20c6a10a6d1e0f50a20597278d0b3167ddc3e Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: sh -c Args: set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 11 Feb 2026 19:14:05 +0800 Finished: Wed, 11 Feb 2026 19:15:13 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /dp_downward from downward-volume-sidecard (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42mtd (ro) Conditions: Type Status PodReadyToStartContainers False Initialized True Ready False ContainersReady False PodScheduled True Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-clkhouse-icopne-backup-clickhouse-ngn-0 ReadOnly: false client-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-client-tpl Optional: false config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-tpl Optional: false downward-volume-sidecard: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] -> stop_restore_manager dp-datasafed-config: Type: Secret (a volume 
populated by a Secret) SecretName: tool-config-backuprepo-kbcli-test-88dtkr Optional: false dp-datasafed-bin: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: kube-api-access-42mtd: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=aks-cicdamdpool-55976491-vmss000001 Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m15s default-scheduler Successfully assigned ns-uwpgk/restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn to aks-cicdamdpool-55976491-vmss000001 Normal Pulled 6m15s kubelet Container image "docker.io/apecloud/datasafed:0.2.3" already present on machine Normal Created 6m15s kubelet Created container: dp-copy-datasafed Normal Started 6m15s kubelet Started container dp-copy-datasafed Normal Pulling 6m14s kubelet Pulling image "docker.io/apecloud/clickhouse-backup-full:2.6.42" Normal Pulled 6m2s kubelet Successfully pulled image "docker.io/apecloud/clickhouse-backup-full:2.6.42" in 12.166s (12.166s including waiting). Image size: 344196628 bytes. Normal Created 6m2s kubelet Created container: restore Normal Started 6m2s kubelet Started container restore Normal Pulled 6m2s kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on machine Normal Created 6m2s kubelet Created container: restore-manager Normal Started 6m1s kubelet Started container restore-manager Warning FailedToRetrieveImagePullSecret 4m54s (x5 over 6m15s) kubelet Unable to retrieve some image pull secrets (kbcli-test-registry-key); attempting to pull the image may not succeed. 
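For readability, here is a condensed sketch of the coordination logic embedded in the restore container's Command above. It reuses the names defined in that script (ch_query, ALL_COMBINED_SHARDS_POD_FQDN_LIST, CURRENT_SHARD_COMPONENT_SHORT_NAME, DP_BACKUP_NAME); it is a simplified outline, not the exact script — error handling, the standalone branch and the RESTORE_SCHEMA_READY_TIMEOUT_SECONDS check are omitted:

# Sketch: the first shard restores the schema with ON CLUSTER DDL and publishes a
# marker table; every other shard polls for that marker before restoring its data.
first_entry="${ALL_COMBINED_SHARDS_POD_FQDN_LIST%%,*}"   # "shard:pod-fqdn,..."
first_shard="${first_entry%%:*}"                         # e.g. clickhouse-5lm

if [[ "${CURRENT_SHARD_COMPONENT_SHORT_NAME}" == "$first_shard" ]]; then
  # schema + RBAC restore, run once; RESTORE_SCHEMA_ON_CLUSTER makes it Distributed DDL
  clickhouse-backup restore_remote "${DP_BACKUP_NAME}" --schema --rbac
  ch_query "CREATE DATABASE IF NOT EXISTS \`kubeblocks\` ON CLUSTER \`${INIT_CLUSTER_NAME}\`"
  ch_query "CREATE TABLE IF NOT EXISTS \`kubeblocks\`.\`__restore_ready__\` ON CLUSTER \`${INIT_CLUSTER_NAME}\` (shard String, finished_at DateTime, backup_name String) ENGINE=TinyLog"
else
  # non-first shards wait until the marker table is visible on their local node
  until [[ "$(ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`')" == "1" ]]; do
    sleep "${RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS:-5}"
  done
fi

# every shard then restores its own data and records completion in the marker table
clickhouse-backup restore_remote "${DP_BACKUP_NAME}" --data
ch_query "INSERT INTO \`kubeblocks\`.\`__restore_ready__\` (shard, finished_at, backup_name) VALUES ('${CURRENT_SHARD_COMPONENT_SHORT_NAME}', now(), '${DP_BACKUP_NAME}')"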
------------------------------------------------------------------------------------------------------------------  `kubectl describe pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 --namespace ns-uwpgk `(B  Name: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 Namespace: ns-uwpgk Priority: 0 Service Account: kubeblocks-dataprotection-worker Node: aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Start Time: Wed, 11 Feb 2026 19:15:23 +0800 Labels: app.kubernetes.io/managed-by=kubeblocks-dataprotection batch.kubernetes.io/controller-uid=b0539470-d058-40af-94c5-e989ce4bf597 batch.kubernetes.io/job-name=restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 controller-uid=b0539470-d058-40af-94c5-e989ce4bf597 dataprotection.kubeblocks.io/restore=clkhouse-icopne-backup-clickhouse-ngn-5362cf29-postready job-name=restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 Annotations: dataprotection.kubeblocks.io/stop-restore-manager: true Status: Failed IP: 10.244.4.223 IPs: IP: 10.244.4.223 Controlled By: Job/restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-2-0-0 Init Containers: dp-copy-datasafed: Container ID: containerd://c78f77a980f4096032568f89ce210705a5ad4279029b33c55c3d013274242398 Image: docker.io/apecloud/datasafed:0.2.3 Image ID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f Port: Host Port: Command: /bin/sh -c /scripts/install-datasafed.sh /bin/datasafed State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 11 Feb 2026 19:15:23 +0800 Finished: Wed, 11 Feb 2026 19:15:23 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: Mounts: /bin/datasafed from dp-datasafed-bin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwzps (ro) Containers: restore: Container ID: containerd://ab18e7551cab12af38b78e7b84a7b4e7155801885ea5c2a93b369a72f1a316ab Image: docker.io/apecloud/clickhouse-backup-full:2.6.42 Image ID: docker.io/apecloud/clickhouse-backup-full@sha256:0dedf050bf78f889c2d6ed7120aae4df927c7816a72863ac017aba49c072af4e Port: Host Port: Command: bash -c #!/bin/bash # log info file function DP_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} INFO: $msg" } # log error info function DP_error_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} ERROR: $msg" } # Get file names without extensions based on the incoming file path function DP_get_file_name_without_ext() { local fileName=$1 local file_without_ext=${fileName%.*} echo $(basename ${file_without_ext}) } # Save backup status info file for syncing progress. # timeFormat: %Y-%m-%dT%H:%M:%SZ function DP_save_backup_status_info() { local totalSize=$1 local startTime=$2 local stopTime=$3 local timeZone=$4 local extras=$5 local timeZoneStr="" if [ ! -z ${timeZone} ]; then timeZoneStr=",\"timeZone\":\"${timeZone}\"" fi if [ -z "${stopTime}" ]; then echo "{\"totalSize\":\"${totalSize}\"}" >${DP_BACKUP_INFO_FILE} elif [ -z "${startTime}" ]; then echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"end\":\"${stopTime}\"${timeZoneStr}}}" >${DP_BACKUP_INFO_FILE} else echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"start\":\"${startTime}\",\"end\":\"${stopTime}\"${timeZoneStr}}}" >${DP_BACKUP_INFO_FILE} fi } # Clean up expired logfiles. 
# Default interval is 60s # Default rootPath is / function DP_purge_expired_files() { local currentUnix="${1:?missing current unix}" local last_purge_time="${2:?missing last_purge_time}" local root_path=${3:-"/"} local interval_seconds=${4:-60} local diff_time=$((${currentUnix} - ${last_purge_time})) if [[ -z ${DP_TTL_SECONDS} || ${diff_time} -lt ${interval_seconds} ]]; then return fi expiredUnix=$((${currentUnix} - ${DP_TTL_SECONDS})) files=$(datasafed list -f --recursive --older-than ${expiredUnix} ${root_path}) for file in "${files[@]}"; do datasafed rm "$file" echo "$file" done } # analyze the start time of the earliest file from the datasafed backend. # Then record the file name into dp_oldest_file.info. # If the oldest file is no changed, exit the process. # This can save traffic consumption. function DP_analyze_start_time_from_datasafed() { local oldest_file="${1:?missing oldest file}" local get_start_time_from_file="${2:?missing get_start_time_from_file function}" local datasafed_pull="${3:?missing datasafed_pull function}" local info_file="${KB_BACKUP_WORKDIR}/dp_oldest_file.info" mkdir -p ${KB_BACKUP_WORKDIR} && cd ${KB_BACKUP_WORKDIR} if [ -f ${info_file} ]; then last_oldest_file=$(cat ${info_file}) last_oldest_file_name=$(DP_get_file_name_without_ext ${last_oldest_file}) if [ "$last_oldest_file" == "${oldest_file}" ]; then # oldest file no changed. ${get_start_time_from_file} $last_oldest_file_name return fi # remove last oldest file if [ -f ${last_oldest_file_name} ]; then rm -rf ${last_oldest_file_name} fi fi # pull file ${datasafed_pull} ${oldest_file} # record last oldest file echo ${oldest_file} >${info_file} oldest_file_name=$(DP_get_file_name_without_ext ${oldest_file}) ${get_start_time_from_file} ${oldest_file_name} } # get the timeZone offset for location, such as Asia/Shanghai function getTimeZoneOffset() { local timeZone=${1:?missing time zone} if [[ $timeZone == "+"* ]] || [[ $timeZone == "-"* ]]; then echo ${timeZone} return fi local currTime=$(TZ=UTC date) local utcHour=$(TZ=UTC date -d "${currTime}" +"%H") local zoneHour=$(TZ=${timeZone} date -d "${currTime}" +"%H") local offset=$((${zoneHour} - ${utcHour})) if [ $offset -eq 0 ]; then return fi symbol="+" if [ $offset -lt 0 ]; then symbol="-" && offset=${offset:1} fi if [ $offset -lt 10 ]; then offset="0${offset}" fi echo "${symbol}${offset}:00" } # if the script exits with a non-zero exit code, touch a file to indicate that the backup failed, # the sync progress container will check this file and exit if it exists function handle_exit() { exit_code=$? if [ "$exit_code" -ne 0 ]; then DP_error_log "Backup failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function generate_backup_config() { clickhouse_backup_config=$(mktemp) || { DP_error_log "Failed to create temporary file" return 1 } # whole config see https://github.com/Altinity/clickhouse-backup cat >"$clickhouse_backup_config" <<'EOF' general: remote_storage: s3 # REMOTE_STORAGE, choice from: `azblob`,`gcs`,`s3`, etc; if `none` then `upload` and `download` commands will fail. 
max_file_size: 1125899906842624 # MAX_FILE_SIZE, 1PB by default, useless when upload_by_part is true, use to split data parts files by archives backups_to_keep_local: 0 # BACKUPS_TO_KEEP_LOCAL, how many latest local backup should be kept, 0 means all created backups will be stored on local disk, -1 means backup will keep after `create` but will delete after `create_remote` command backups_to_keep_remote: 0 # BACKUPS_TO_KEEP_REMOTE, how many latest backup should be kept on remote storage, 0 means all uploaded backups will be stored on remote storage. log_level: info # LOG_LEVEL, a choice from `debug`, `info`, `warning`, `error` allow_empty_backups: true # ALLOW_EMPTY_BACKUPS download_concurrency: 1 # DOWNLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2)) upload_concurrency: 1 # UPLOAD_CONCURRENCY, max 255, by default, the value is round(sqrt(AVAILABLE_CPU_CORES / 2)) download_max_bytes_per_second: 0 # DOWNLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling upload_max_bytes_per_second: 0 # UPLOAD_MAX_BYTES_PER_SECOND, 0 means no throttling object_disk_server_side_copy_concurrency: 32 allow_object_disk_streaming: false # restore schema on cluster is alway run by `INIT_CLUSTER_NAME` cluster of clickhouse, when schema restore, the ddl only runs on first pod of first shard restore_schema_on_cluster: "" # RESTORE_SCHEMA_ON_CLUSTER, execute all schema related SQL queries with `ON CLUSTER` clause as Distributed DDL. This isn't applicable when `use_embedded_backup_restore: true` upload_by_part: true # UPLOAD_BY_PART download_by_part: true # DOWNLOAD_BY_PART use_resumable_state: true # USE_RESUMABLE_STATE, allow resume upload and download according to the .resumable file. Resumable state is not supported for custom method in remote storage. restore_database_mapping: {} # RESTORE_DATABASE_MAPPING, like "src_db1:target_db1,src_db2:target_db2", restore rules from backup databases to target databases, which is useful when changing destination database, all atomic tables will be created with new UUIDs. restore_table_mapping: {} # RESTORE_TABLE_MAPPING, like "src_table1:target_table1,src_table2:target_table2" restore rules from backup tables to target tables, which is useful when changing destination tables. retries_on_failure: 3 # RETRIES_ON_FAILURE, how many times to retry after a failure during upload or download retries_pause: 5s # RETRIES_PAUSE, duration time to pause after each download or upload failure watch_interval: 1h # WATCH_INTERVAL, use only for `watch` command, backup will create every 1h full_interval: 24h # FULL_INTERVAL, use only for `watch` command, full backup will create every 24h watch_backup_name_template: "shard{shard}-{type}-{time:20060102150405}" # WATCH_BACKUP_NAME_TEMPLATE, used only for `watch` command, macros values will apply from `system.macros` for time:XXX, look format in https://go.dev/src/time/format.go sharded_operation_mode: none # SHARDED_OPERATION_MODE, how different replicas will shard backing up data for tables. Options are: none (no sharding), table (table granularity), database (database granularity), first-replica (on the lexicographically sorted first active replica). If left empty, then the "none" option will be set as default. 
cpu_nice_priority: 15 # CPU niceness priority, to allow throttling CPU intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/nice.1.html io_nice_priority: "idle" # IO niceness priority, to allow throttling DISK intensive operation, more details https://manpages.ubuntu.com/manpages/xenial/man1/ionice.1.html rbac_backup_always: true # always, backup RBAC objects rbac_resolve_conflicts: "recreate" # action, when RBAC object with the same name already exists, allow "recreate", "ignore", "fail" values clickhouse: username: default # CLICKHOUSE_USERNAME password: "" # CLICKHOUSE_PASSWORD host: localhost # CLICKHOUSE_HOST, To make backup data `clickhouse-backup` requires access to the same file system as clickhouse-server, so `host` should localhost or address of another docker container on the same machine, or IP address bound to some network interface on the same host. port: 9000 # CLICKHOUSE_PORT, don't use 8123, clickhouse-backup doesn't support HTTP protocol disk_mapping: {} # CLICKHOUSE_DISK_MAPPING, use this mapping when your `system.disks` are different between the source and destination clusters during backup and restore process. The format for this env variable is "disk_name1:disk_path1,disk_name2:disk_path2". For YAML please continue using map syntax. If destination disk is different from source backup disk then you need to specify the destination disk in the config file: disk_mapping: disk_destination: /var/lib/clickhouse/disks/destination `disk_destination` needs to be referenced in backup (source config), and all names from this map (`disk:path`) shall exist in `system.disks` on destination server. During download of the backup from remote location (s3), if `name` is not present in `disk_mapping` (on the destination server config too) then `default` disk path will used for download. `disk_mapping` is used to understand during download where downloaded parts shall be unpacked (which disk) on destination server and where to search for data parts directories during restore. skip_tables: # CLICKHOUSE_SKIP_TABLES, the list of tables (pattern are allowed) which are ignored during backup and restore process The format for this env variable is "pattern1,pattern2,pattern3". For YAML please continue using list syntax - system.* - INFORMATION_SCHEMA.* - information_schema.* skip_table_engines: [] # CLICKHOUSE_SKIP_TABLE_ENGINES, the list of tables engines which are ignored during backup, upload, download, restore process The format for this env variable is "Engine1,Engine2,engine3". For YAML please continue using list syntax skip_disks: [] # CLICKHOUSE_SKIP_DISKS, list of disk names which are ignored during create, upload, download and restore command The format for this env variable is "Engine1,Engine2,engine3". For YAML please continue using list syntax skip_disk_types: [] # CLICKHOUSE_SKIP_DISK_TYPES, list of disk types which are ignored during create, upload, download and restore command The format for this env variable is "Engine1,Engine2,engine3". 
(remainder of Command identical to the clickhouse-backup configuration and restore script shown for pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn above) # 3.
Cleanup local backups delete_backups_except "" State: Terminated Reason: Error Exit Code: 1 Started: Wed, 11 Feb 2026 19:15:25 +0800 Finished: Wed, 11 Feb 2026 19:15:25 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: clkhouse-icopne-backup-clickhouse-ngn-env ConfigMap Optional: false Environment: DP_BACKUP_NAME: backup-ns-uwpgk-clkhouse-icopne-20260211191142 DP_TARGET_RELATIVE_PATH: clickhouse-7gx DP_BACKUP_ROOT_PATH: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse DP_BACKUP_BASE_PATH: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx DP_BACKUP_STOP_TIME: 2026-02-11T11:12:05Z RESTORE_SCHEMA_READY_TIMEOUT_SECONDS: 1800 RESTORE_SCHEMA_READY_CHECK_INTERVAL_SECONDS: 5 CLICKHOUSE_ADMIN_PASSWORD: Optional: false CURRENT_POD_NAME: restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 (v1:metadata.name) INIT_CLUSTER_NAME: default DP_DB_USER: Optional: false DP_DB_PASSWORD: Optional: false DP_DB_PORT: 8001 DP_DB_HOST: clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /bitnami/clickhouse from data (rw) /etc/clickhouse-client from client-config (rw) /etc/datasafed from dp-datasafed-config (ro) /opt/bitnami/clickhouse/etc/conf.d from config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwzps (ro) restore-manager: Container ID: containerd://c2e5dffe215742912be09e59db1af965ed07040be91b4e1b98c0b1094c5eee48 Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: sh -c Args: set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 11 Feb 2026 19:15:25 +0800 Finished: Wed, 11 Feb 2026 19:15:28 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /dp_downward from downward-volume-sidecard (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwzps (ro) Conditions: Type Status PodReadyToStartContainers False Initialized True Ready False ContainersReady False PodScheduled True Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-clkhouse-icopne-backup-clickhouse-ngn-0 ReadOnly: false client-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-client-tpl Optional: false config: Type: ConfigMap (a volume populated by a ConfigMap) Name: clkhouse-icopne-backup-clickhouse-ngn-clickhouse-tpl Optional: false downward-volume-sidecard: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] -> stop_restore_manager dp-datasafed-config: Type: Secret (a volume 
populated by a Secret) SecretName: tool-config-backuprepo-kbcli-test-88dtkr Optional: false dp-datasafed-bin: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: kube-api-access-xwzps: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=aks-cicdamdpool-55976491-vmss000001 Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m44s default-scheduler Successfully assigned ns-uwpgk/restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 to aks-cicdamdpool-55976491-vmss000001 Normal Pulled 4m44s kubelet Container image "docker.io/apecloud/datasafed:0.2.3" already present on machine Normal Created 4m44s kubelet Created container: dp-copy-datasafed Normal Started 4m44s kubelet Started container dp-copy-datasafed Normal Pulled 4m43s kubelet Container image "docker.io/apecloud/clickhouse-backup-full:2.6.42" already present on machine Normal Created 4m43s kubelet Created container: restore Normal Started 4m42s kubelet Started container restore Normal Pulled 4m42s kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on machine Normal Created 4m42s kubelet Created container: restore-manager Normal Started 4m42s kubelet Started container restore-manager Warning FailedToRetrieveImagePullSecret 4m41s (x5 over 4m44s) kubelet Unable to retrieve some image pull secrets (kbcli-test-registry-key); attempting to pull the image may not succeed. ------------------------------------------------------------------------------------------------------------------ --------------------------------------pod restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8--------------------------------------  `kubectl logs restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-wqncn --namespace ns-uwpgk --tail 500`(B  + trap handle_exit EXIT + generate_backup_config ++ mktemp + clickhouse_backup_config=/tmp/tmp.a3JBMox04q + cat + export CLICKHOUSE_BACKUP_CONFIG=/tmp/tmp.a3JBMox04q + CLICKHOUSE_BACKUP_CONFIG=/tmp/tmp.a3JBMox04q + set_clickhouse_backup_config_env + toolConfig=/etc/datasafed/datasafed.conf + '[' '!' 
-f /etc/datasafed/datasafed.conf ']' + local provider= + local access_key_id= + local secret_access_key= + local region= + local endpoint= + local bucket= + IFS=' ' ++ cat /etc/datasafed/datasafed.conf + for line in $(cat ${toolConfig}) ++ eval echo '[storage]' +++ echo '[storage]' + line='[storage]' + [[ [storage] == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ [storage] == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ [storage] == \r\e\g\i\o\n* ]] + [[ [storage] == \e\n\d\p\o\i\n\t* ]] + [[ [storage] == \r\o\o\t* ]] + [[ [storage] == \c\h\u\n\k\_\s\i\z\e* ]] + [[ [storage] == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'type = s3' +++ echo type = s3 + line='type = s3' + [[ type = s3 == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ type = s3 == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ type = s3 == \r\e\g\i\o\n* ]] + [[ type = s3 == \e\n\d\p\o\i\n\t* ]] + [[ type = s3 == \r\o\o\t* ]] + [[ type = s3 == \c\h\u\n\k\_\s\i\z\e* ]] + [[ type = s3 == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'provider = Minio' +++ echo provider = Minio + line='provider = Minio' + [[ provider = Minio == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ provider = Minio == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ provider = Minio == \r\e\g\i\o\n* ]] + [[ provider = Minio == \e\n\d\p\o\i\n\t* ]] + [[ provider = Minio == \r\o\o\t* ]] + [[ provider = Minio == \c\h\u\n\k\_\s\i\z\e* ]] + [[ provider = Minio == \p\r\o\v\i\d\e\r* ]] ++ getToolConfigValue 'provider = Minio' ++ local 'var=provider = Minio' ++ cat /etc/datasafed/datasafed.conf ++ grep 'provider = Minio' ++ awk '{print $NF}' + provider=Minio + for line in $(cat ${toolConfig}) ++ eval echo 'env_auth = false' +++ echo env_auth = false + line='env_auth = false' + [[ env_auth = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ env_auth = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ env_auth = false == \r\e\g\i\o\n* ]] + [[ env_auth = false == \e\n\d\p\o\i\n\t* ]] + [[ env_auth = false == \r\o\o\t* ]] + [[ env_auth = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ env_auth = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'access_key_id = kbclitest' +++ echo access_key_id = kbclitest + line='access_key_id = kbclitest' + [[ access_key_id = kbclitest == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] ++ getToolConfigValue 'access_key_id = kbclitest' ++ local 'var=access_key_id = kbclitest' ++ cat /etc/datasafed/datasafed.conf ++ grep 'access_key_id = kbclitest' ++ awk '{print $NF}' + access_key_id=kbclitest + for line in $(cat ${toolConfig}) ++ eval echo 'secret_access_key = kbclitest' +++ echo secret_access_key = kbclitest + line='secret_access_key = kbclitest' + [[ secret_access_key = kbclitest == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ secret_access_key = kbclitest == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] ++ getToolConfigValue 'secret_access_key = kbclitest' ++ local 'var=secret_access_key = kbclitest' ++ cat /etc/datasafed/datasafed.conf ++ grep 'secret_access_key = kbclitest' ++ awk '{print $NF}' + secret_access_key=kbclitest + for line in $(cat ${toolConfig}) ++ eval echo 'region = ' +++ echo region = + line='region =' + [[ region = == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ region = == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ region = == \r\e\g\i\o\n* ]] ++ getToolConfigValue 'region =' ++ local 'var=region =' ++ cat /etc/datasafed/datasafed.conf ++ grep 'region =' ++ awk '{print $NF}' + region== + for line in $(cat ${toolConfig}) ++ eval echo 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' +++ echo endpoint = 
http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + line='endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \r\e\g\i\o\n* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \e\n\d\p\o\i\n\t* ]] ++ getToolConfigValue 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ local 'var=endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ cat /etc/datasafed/datasafed.conf ++ grep 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ awk '{print $NF}' + endpoint=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + for line in $(cat ${toolConfig}) ++ eval echo 'root = kbcli-test' +++ echo root = kbcli-test + line='root = kbcli-test' + [[ root = kbcli-test == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ root = kbcli-test == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ root = kbcli-test == \r\e\g\i\o\n* ]] + [[ root = kbcli-test == \e\n\d\p\o\i\n\t* ]] + [[ root = kbcli-test == \r\o\o\t* ]] ++ getToolConfigValue 'root = kbcli-test' ++ local 'var=root = kbcli-test' ++ cat /etc/datasafed/datasafed.conf ++ grep 'root = kbcli-test' ++ awk '{print $NF}' + bucket=kbcli-test + for line in $(cat ${toolConfig}) ++ eval echo 'no_check_certificate = false' +++ echo no_check_certificate = false + line='no_check_certificate = false' + [[ no_check_certificate = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ no_check_certificate = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ no_check_certificate = false == \r\e\g\i\o\n* ]] + [[ no_check_certificate = false == \e\n\d\p\o\i\n\t* ]] + [[ no_check_certificate = false == \r\o\o\t* ]] + [[ no_check_certificate = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ no_check_certificate = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'no_check_bucket = false' +++ echo no_check_bucket = false + line='no_check_bucket = false' + [[ no_check_bucket = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ no_check_bucket = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ no_check_bucket = false == \r\e\g\i\o\n* ]] + [[ no_check_bucket = false == \e\n\d\p\o\i\n\t* ]] + [[ no_check_bucket = false == \r\o\o\t* ]] + [[ no_check_bucket = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ no_check_bucket = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'chunk_size = 50Mi' +++ echo chunk_size = 50Mi + line='chunk_size = 50Mi' + [[ chunk_size = 50Mi == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ chunk_size = 50Mi == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ chunk_size = 50Mi == \r\e\g\i\o\n* ]] + [[ chunk_size = 50Mi == \e\n\d\p\o\i\n\t* ]] + [[ chunk_size = 50Mi == \r\o\o\t* ]] + [[ chunk_size = 50Mi == \c\h\u\n\k\_\s\i\z\e* ]] ++ getToolConfigValue 'chunk_size = 50Mi' ++ local 'var=chunk_size = 50Mi' ++ cat /etc/datasafed/datasafed.conf ++ grep 'chunk_size = 50Mi' ++ awk '{print $NF}' + chunk_size=50Mi + [[ ! 
http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 =~ ^https?:// ]] + [[ Minio == \A\l\i\b\a\b\a ]] + [[ Minio == \T\e\n\c\e\n\t\C\O\S ]] + [[ Minio == \M\i\n\i\o ]] + export S3_FORCE_PATH_STYLE=true + S3_FORCE_PATH_STYLE=true + export S3_ACCESS_KEY=kbclitest + S3_ACCESS_KEY=kbclitest + export S3_SECRET_KEY=kbclitest + S3_SECRET_KEY=kbclitest + export S3_REGION== + S3_REGION== + export S3_ENDPOINT=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + S3_ENDPOINT=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + export S3_BUCKET=kbcli-test + S3_BUCKET=kbcli-test + export S3_PART_SIZE=50Mi + S3_PART_SIZE=50Mi + export S3_PATH=/ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx + S3_PATH=/ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx + export INIT_CLUSTER_NAME=default + INIT_CLUSTER_NAME=default + export RESTORE_SCHEMA_ON_CLUSTER=default + RESTORE_SCHEMA_ON_CLUSTER=default + export CLICKHOUSE_HOST=clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless + CLICKHOUSE_HOST=clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless + export CLICKHOUSE_USERNAME=admin + CLICKHOUSE_USERNAME=admin + export CLICKHOUSE_PASSWORD=VH838l0WO3 + CLICKHOUSE_PASSWORD=VH838l0WO3 + [[ false == \t\r\u\e ]] + DP_log 'Dynamic environment variables for clickhouse-backup have been set.' + msg='Dynamic environment variables for clickhouse-backup have been set.' ++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:14:04' + echo '2026-02-11 11:14:04 INFO: Dynamic environment variables for clickhouse-backup have been set.' 2026-02-11 11:14:04 INFO: Dynamic environment variables for clickhouse-backup have been set. + [[ '' = \t\r\u\e ]] + first_entry=clickhouse-5lm:clkhouse-icopne-backup-clickhouse-5lm-0.clkhouse-icopne-backup-clickhouse-5lm-headless.ns-uwpgk.svc.cluster.local + first_component=clickhouse-5lm + [[ -z clickhouse-5lm ]] + [[ clickhouse-5lm == \c\l\i\c\k\h\o\u\s\e\-\5\l\m\:\c\l\k\h\o\u\s\e\-\i\c\o\p\n\e\-\b\a\c\k\u\p\-\c\l\i\c\k\h\o\u\s\e\-\5\l\m\-\0\.\c\l\k\h\o\u\s\e\-\i\c\o\p\n\e\-\b\a\c\k\u\p\-\c\l\i\c\k\h\o\u\s\e\-\5\l\m\-\h\e\a\d\l\e\s\s\.\n\s\-\u\w\p\g\k\.\s\v\c\.\c\l\u\s\t\e\r\.\l\o\c\a\l ]] + mode_info=cluster:clickhouse-5lm + do_restore backup-ns-uwpgk-clkhouse-icopne-20260211191142 cluster:clickhouse-5lm + local backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142 + local mode_info=cluster:clickhouse-5lm + local schema_db=kubeblocks + local schema_table=__restore_ready__ + restore_schema_and_sync backup-ns-uwpgk-clkhouse-icopne-20260211191142 cluster:clickhouse-5lm + local backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142 + local mode_info=cluster:clickhouse-5lm + local schema_db=kubeblocks + local schema_table=__restore_ready__ + local timeout=1800 + local interval=5 + local should_restore_schema=false + [[ cluster:clickhouse-5lm == \s\t\a\n\d\a\l\o\n\e ]] + local first_component=clickhouse-5lm + [[ clickhouse-ngn == \c\l\i\c\k\h\o\u\s\e\-\5\l\m ]] + [[ false == \t\r\u\e ]] + DP_log 'Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' + msg='Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' 
++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:14:04' + echo '2026-02-11 11:14:04 INFO: Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' 2026-02-11 11:14:04 INFO: Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless... ++ date +%s + local start=1770808444 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808445 + [[ 1 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808450 + [[ 6 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808455 + [[ 11 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808460 + [[ 16 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 
--host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808465 + [[ 21 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808470 + [[ 26 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808475 + [[ 31 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808480 + [[ 36 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808485 + [[ 41 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE 
`kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808490 + [[ 46 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808495 + [[ 51 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808500 + [[ 56 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 0 == \1 ]] ++ date +%s + local now=1770808506 + [[ 62 -ge 1800 ]] + sleep 5 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 1 == \1 ]] + break + clickhouse-backup restore_remote backup-ns-uwpgk-clkhouse-icopne-20260211191142 --data 2026-02-11 11:15:11.109 INF pkg/clickhouse/clickhouse.go:120 > clickhouse connection success: tcp://clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless:9000 2026-02-11 11:15:11.109 INF pkg/clickhouse/clickhouse.go:1185 > SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' 2026-02-11 11:15:11.111 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN 
('disks','storage_policies') 2026-02-11 11:15:11.113 INF pkg/clickhouse/clickhouse.go:1185 > SELECT d.path AS path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path 2026-02-11 11:15:11.115 INF pkg/clickhouse/clickhouse.go:1185 > SELECT max(toInt64(bytes_on_disk * 1.02)) AS max_file_size FROM system.parts WHERE active SETTINGS empty_result_for_aggregation_by_empty_set=0 2026-02-11 11:15:11.135 INF pkg/storage/general.go:222 > , list_duration=8.061179 2026-02-11 11:15:11.175 INF pkg/backup/download.go:521 > done, table_metadata=default.test_kbcli 2026-02-11 11:15:11.184 INF pkg/backup/download.go:233 > done, backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142, duration=9ms, operation=download_data, progress=1/1, size=408B, table=default.test_kbcli, version=2.6.42 2026-02-11 11:15:11.189 INF pkg/backup/download.go:312 > done, backup=backup-ns-uwpgk-clkhouse-icopne-20260211191142, download_size=953B, duration=74ms, object_disk_size=0B, operation=download, version=2.6.42 2026-02-11 11:15:11.189 INF pkg/clickhouse/clickhouse.go:322 > clickhouse connection closed 2026-02-11 11:15:11.191 INF pkg/clickhouse/clickhouse.go:120 > clickhouse connection success: tcp://clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless:9000 2026-02-11 11:15:11.191 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies') 2026-02-11 11:15:11.194 INF pkg/clickhouse/clickhouse.go:1185 > SELECT d.path AS path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path 2026-02-11 11:15:11.196 INF pkg/clickhouse/clickhouse.go:1183 > CREATE DATABASE IF NOT EXISTS `default` ON CLUSTER 'default' ENGINE = Atomic with args []interface {}{[]interface {}(nil)} 2026-02-11 11:15:11.385 INF pkg/clickhouse/clickhouse.go:1183 > SELECT name, count(*) as is_present FROM system.settings WHERE name IN (?, ?) 
GROUP BY name with args []interface {}{"display_secrets_in_show_and_select", "show_table_uuid_in_table_create_query_if_not_nil"} 2026-02-11 11:15:11.388 INF pkg/clickhouse/clickhouse.go:1185 > SELECT name FROM system.databases WHERE engine IN ('MySQL','PostgreSQL','MaterializedPostgreSQL') 2026-02-11 11:15:11.389 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='data_path') is_data_path_present, countIf(name='data_paths') is_data_paths_present, countIf(name='uuid') is_uuid_present, countIf(name='create_table_query') is_create_table_query_present, countIf(name='total_bytes') is_total_bytes_present FROM system.columns WHERE database='system' AND table='tables' 2026-02-11 11:15:11.391 INF pkg/clickhouse/clickhouse.go:1185 > SELECT database, name, engine , data_paths , uuid , create_table_query , coalesce(total_bytes, 0) AS total_bytes FROM system.tables WHERE is_temporary = 0 AND match(concat(database,'.',name),'^.*$') ORDER BY total_bytes DESC SETTINGS show_table_uuid_in_table_create_query_if_not_nil=1 2026-02-11 11:15:11.397 INF pkg/clickhouse/clickhouse.go:1185 > SELECT metadata_path FROM system.tables WHERE database = 'system' AND metadata_path!='' LIMIT 1 2026-02-11 11:15:11.399 INF pkg/clickhouse/clickhouse.go:1185 > SELECT sum(bytes_on_disk) as size FROM system.parts WHERE active AND database='executions_loop' AND table='executions_loop_table' GROUP BY database, table 2026-02-11 11:15:11.400 INF pkg/clickhouse/clickhouse.go:322 > clickhouse connection closed 2026-02-11 11:15:11.400 FTL cmd/clickhouse-backup/main.go:820 > , error='default.test_kbcli' is not created. Restore schema first or create missing tables manually + DP_error_log 'Clickhouse-backup restore_remote --data FAILED' + msg='Clickhouse-backup restore_remote --data FAILED' ++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:15:11' + echo '2026-02-11 11:15:11 ERROR: Clickhouse-backup restore_remote --data FAILED' + return 1 + exit 1 + handle_exit 2026-02-11 11:15:11 ERROR: Clickhouse-backup restore_remote --data FAILED + exit_code=1 + '[' 1 -ne 0 ']' + DP_error_log 'Backup failed with exit code 1' + msg='Backup failed with exit code 1' ++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:15:11' + echo '2026-02-11 11:15:11 ERROR: Backup failed with exit code 1' + touch .exit 2026-02-11 11:15:11 ERROR: Backup failed with exit code 1 + exit 1 ------------------------------------------------------------------------------------------------------------------  `kubectl logs restore-post-ready-00cc9a83-backup-ns-uwpgk-clkhouse-icop-zg5x8 --namespace ns-uwpgk --tail 500`(B  + trap handle_exit EXIT + generate_backup_config ++ mktemp + clickhouse_backup_config=/tmp/tmp.HTBAj96mI8 + cat + export CLICKHOUSE_BACKUP_CONFIG=/tmp/tmp.HTBAj96mI8 + CLICKHOUSE_BACKUP_CONFIG=/tmp/tmp.HTBAj96mI8 + set_clickhouse_backup_config_env + toolConfig=/etc/datasafed/datasafed.conf + '[' '!' 
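
The fatal error above ("'default.test_kbcli' is not created. Restore schema first or create missing tables manually") means restore_remote --data tried to attach parts for a table whose schema does not yet exist on this replica; in this actionset the schema is expected to be restored first, by the designated first component on cluster 'default', before the data-only pass runs. A rough sketch of the ordering the message asks for, assuming the restore flags of the clickhouse-backup build seen in this log (2.6.42); verify against that version before relying on it:

  backup=backup-ns-uwpgk-clkhouse-icopne-20260211191142
  clickhouse-backup list remote                          # confirm the backup is visible in the repo
  clickhouse-backup restore_remote "$backup" --schema    # recreate default.test_kbcli first
  clickhouse-backup restore_remote "$backup" --data      # then download and attach the parts
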
-f /etc/datasafed/datasafed.conf ']' + local provider= + local access_key_id= + local secret_access_key= + local region= + local endpoint= + local bucket= + IFS=' ' ++ cat /etc/datasafed/datasafed.conf + for line in $(cat ${toolConfig}) ++ eval echo '[storage]' +++ echo '[storage]' + line='[storage]' + [[ [storage] == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ [storage] == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ [storage] == \r\e\g\i\o\n* ]] + [[ [storage] == \e\n\d\p\o\i\n\t* ]] + [[ [storage] == \r\o\o\t* ]] + [[ [storage] == \c\h\u\n\k\_\s\i\z\e* ]] + [[ [storage] == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'type = s3' +++ echo type = s3 + line='type = s3' + [[ type = s3 == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ type = s3 == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ type = s3 == \r\e\g\i\o\n* ]] + [[ type = s3 == \e\n\d\p\o\i\n\t* ]] + [[ type = s3 == \r\o\o\t* ]] + [[ type = s3 == \c\h\u\n\k\_\s\i\z\e* ]] + [[ type = s3 == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'provider = Minio' +++ echo provider = Minio + line='provider = Minio' + [[ provider = Minio == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ provider = Minio == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ provider = Minio == \r\e\g\i\o\n* ]] + [[ provider = Minio == \e\n\d\p\o\i\n\t* ]] + [[ provider = Minio == \r\o\o\t* ]] + [[ provider = Minio == \c\h\u\n\k\_\s\i\z\e* ]] + [[ provider = Minio == \p\r\o\v\i\d\e\r* ]] ++ getToolConfigValue 'provider = Minio' ++ local 'var=provider = Minio' ++ cat /etc/datasafed/datasafed.conf ++ grep 'provider = Minio' ++ awk '{print $NF}' + provider=Minio + for line in $(cat ${toolConfig}) ++ eval echo 'env_auth = false' +++ echo env_auth = false + line='env_auth = false' + [[ env_auth = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ env_auth = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ env_auth = false == \r\e\g\i\o\n* ]] + [[ env_auth = false == \e\n\d\p\o\i\n\t* ]] + [[ env_auth = false == \r\o\o\t* ]] + [[ env_auth = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ env_auth = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'access_key_id = kbclitest' +++ echo access_key_id = kbclitest + line='access_key_id = kbclitest' + [[ access_key_id = kbclitest == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] ++ getToolConfigValue 'access_key_id = kbclitest' ++ local 'var=access_key_id = kbclitest' ++ cat /etc/datasafed/datasafed.conf ++ grep 'access_key_id = kbclitest' ++ awk '{print $NF}' + access_key_id=kbclitest + for line in $(cat ${toolConfig}) ++ eval echo 'secret_access_key = kbclitest' +++ echo secret_access_key = kbclitest + line='secret_access_key = kbclitest' + [[ secret_access_key = kbclitest == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ secret_access_key = kbclitest == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] ++ getToolConfigValue 'secret_access_key = kbclitest' ++ local 'var=secret_access_key = kbclitest' ++ cat /etc/datasafed/datasafed.conf ++ grep 'secret_access_key = kbclitest' ++ awk '{print $NF}' + secret_access_key=kbclitest + for line in $(cat ${toolConfig}) ++ eval echo 'region = ' +++ echo region = + line='region =' + [[ region = == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ region = == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ region = == \r\e\g\i\o\n* ]] ++ getToolConfigValue 'region =' ++ local 'var=region =' ++ cat /etc/datasafed/datasafed.conf ++ grep 'region =' ++ awk '{print $NF}' + region== + for line in $(cat ${toolConfig}) ++ eval echo 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' +++ echo endpoint = 
http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + line='endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \r\e\g\i\o\n* ]] + [[ endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 == \e\n\d\p\o\i\n\t* ]] ++ getToolConfigValue 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ local 'var=endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ cat /etc/datasafed/datasafed.conf ++ grep 'endpoint = http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000' ++ awk '{print $NF}' + endpoint=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + for line in $(cat ${toolConfig}) ++ eval echo 'root = kbcli-test' +++ echo root = kbcli-test + line='root = kbcli-test' + [[ root = kbcli-test == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ root = kbcli-test == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ root = kbcli-test == \r\e\g\i\o\n* ]] + [[ root = kbcli-test == \e\n\d\p\o\i\n\t* ]] + [[ root = kbcli-test == \r\o\o\t* ]] ++ getToolConfigValue 'root = kbcli-test' ++ local 'var=root = kbcli-test' ++ cat /etc/datasafed/datasafed.conf ++ grep 'root = kbcli-test' ++ awk '{print $NF}' + bucket=kbcli-test + for line in $(cat ${toolConfig}) ++ eval echo 'no_check_certificate = false' +++ echo no_check_certificate = false + line='no_check_certificate = false' + [[ no_check_certificate = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ no_check_certificate = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ no_check_certificate = false == \r\e\g\i\o\n* ]] + [[ no_check_certificate = false == \e\n\d\p\o\i\n\t* ]] + [[ no_check_certificate = false == \r\o\o\t* ]] + [[ no_check_certificate = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ no_check_certificate = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'no_check_bucket = false' +++ echo no_check_bucket = false + line='no_check_bucket = false' + [[ no_check_bucket = false == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ no_check_bucket = false == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ no_check_bucket = false == \r\e\g\i\o\n* ]] + [[ no_check_bucket = false == \e\n\d\p\o\i\n\t* ]] + [[ no_check_bucket = false == \r\o\o\t* ]] + [[ no_check_bucket = false == \c\h\u\n\k\_\s\i\z\e* ]] + [[ no_check_bucket = false == \p\r\o\v\i\d\e\r* ]] + for line in $(cat ${toolConfig}) ++ eval echo 'chunk_size = 50Mi' +++ echo chunk_size = 50Mi + line='chunk_size = 50Mi' + [[ chunk_size = 50Mi == \a\c\c\e\s\s\_\k\e\y\_\i\d* ]] + [[ chunk_size = 50Mi == \s\e\c\r\e\t\_\a\c\c\e\s\s\_\k\e\y* ]] + [[ chunk_size = 50Mi == \r\e\g\i\o\n* ]] + [[ chunk_size = 50Mi == \e\n\d\p\o\i\n\t* ]] + [[ chunk_size = 50Mi == \r\o\o\t* ]] + [[ chunk_size = 50Mi == \c\h\u\n\k\_\s\i\z\e* ]] ++ getToolConfigValue 'chunk_size = 50Mi' ++ local 'var=chunk_size = 50Mi' ++ cat /etc/datasafed/datasafed.conf ++ grep 'chunk_size = 50Mi' ++ awk '{print $NF}' + chunk_size=50Mi + [[ ! 
http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 =~ ^https?:// ]] + [[ Minio == \A\l\i\b\a\b\a ]] + [[ Minio == \T\e\n\c\e\n\t\C\O\S ]] + [[ Minio == \M\i\n\i\o ]] + export S3_FORCE_PATH_STYLE=true + S3_FORCE_PATH_STYLE=true + export S3_ACCESS_KEY=kbclitest + S3_ACCESS_KEY=kbclitest + export S3_SECRET_KEY=kbclitest + S3_SECRET_KEY=kbclitest + export S3_REGION== + S3_REGION== + export S3_ENDPOINT=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + S3_ENDPOINT=http://kbcli-test-minio.kb-qxtxx.svc.cluster.local:9000 + export S3_BUCKET=kbcli-test + S3_BUCKET=kbcli-test + export S3_PART_SIZE=50Mi + S3_PART_SIZE=50Mi + export S3_PATH=/ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx + S3_PATH=/ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142/clickhouse-7gx + export INIT_CLUSTER_NAME=default + INIT_CLUSTER_NAME=default + export RESTORE_SCHEMA_ON_CLUSTER=default + RESTORE_SCHEMA_ON_CLUSTER=default + export CLICKHOUSE_HOST=clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless + CLICKHOUSE_HOST=clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless + export CLICKHOUSE_USERNAME=admin + CLICKHOUSE_USERNAME=admin + export CLICKHOUSE_PASSWORD=VH838l0WO3 + CLICKHOUSE_PASSWORD=VH838l0WO3 + [[ false == \t\r\u\e ]] + DP_log 'Dynamic environment variables for clickhouse-backup have been set.' + msg='Dynamic environment variables for clickhouse-backup have been set.' ++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:15:25' + echo '2026-02-11 11:15:25 INFO: Dynamic environment variables for clickhouse-backup have been set.' 2026-02-11 11:15:25 INFO: Dynamic environment variables for clickhouse-backup have been set. + [[ '' = \t\r\u\e ]] + first_entry=clickhouse-5lm:clkhouse-icopne-backup-clickhouse-5lm-0.clkhouse-icopne-backup-clickhouse-5lm-headless.ns-uwpgk.svc.cluster.local + first_component=clickhouse-5lm + [[ -z clickhouse-5lm ]] + [[ clickhouse-5lm == \c\l\i\c\k\h\o\u\s\e\-\5\l\m\:\c\l\k\h\o\u\s\e\-\i\c\o\p\n\e\-\b\a\c\k\u\p\-\c\l\i\c\k\h\o\u\s\e\-\5\l\m\-\0\.\c\l\k\h\o\u\s\e\-\i\c\o\p\n\e\-\b\a\c\k\u\p\-\c\l\i\c\k\h\o\u\s\e\-\5\l\m\-\h\e\a\d\l\e\s\s\.\n\s\-\u\w\p\g\k\.\s\v\c\.\c\l\u\s\t\e\r\.\l\o\c\a\l ]] + mode_info=cluster:clickhouse-5lm + do_restore backup-ns-uwpgk-clkhouse-icopne-20260211191142 cluster:clickhouse-5lm + local backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142 + local mode_info=cluster:clickhouse-5lm + local schema_db=kubeblocks + local schema_table=__restore_ready__ + restore_schema_and_sync backup-ns-uwpgk-clkhouse-icopne-20260211191142 cluster:clickhouse-5lm + local backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142 + local mode_info=cluster:clickhouse-5lm + local schema_db=kubeblocks + local schema_table=__restore_ready__ + local timeout=1800 + local interval=5 + local should_restore_schema=false + [[ cluster:clickhouse-5lm == \s\t\a\n\d\a\l\o\n\e ]] + local first_component=clickhouse-5lm + [[ clickhouse-ngn == \c\l\i\c\k\h\o\u\s\e\-\5\l\m ]] + [[ false == \t\r\u\e ]] + DP_log 'Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' + msg='Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' 
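
The loop traced here is the cross-replica sync step: replicas that do not restore the schema poll their local ClickHouse endpoint until the marker table kubeblocks.__restore_ready__ exists, checking every 5 s for up to 1800 s, and only then start the data restore. A standalone sketch of that wait, assuming the CLICKHOUSE_* variables exported earlier in this trace:

  # Sketch of the wait loop; flags match the clickhouse-client invocation in the trace.
  marker='EXISTS TABLE `kubeblocks`.`__restore_ready__`'
  timeout=1800 interval=5 start=$(date +%s)
  until [ "$(clickhouse-client --user "$CLICKHOUSE_USERNAME" --password "$CLICKHOUSE_PASSWORD" \
               --host "$CLICKHOUSE_HOST" --port 9000 --connect_timeout=5 --query "$marker")" = 1 ]; do
    [ $(( $(date +%s) - start )) -ge "$timeout" ] && { echo "schema ready marker never appeared"; exit 1; }
    sleep "$interval"
  done
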
++ date -u '+%Y-%m-%d %H:%M:%S' + local 'curr_date=2026-02-11 11:15:25' + echo '2026-02-11 11:15:25 INFO: Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless...' 2026-02-11 11:15:25 INFO: Waiting for schema ready table on clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless... ++ date +%s + local start=1770808525 + true ++ ch_query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local 'query=EXISTS TABLE `kubeblocks`.`__restore_ready__`' ++ local ch_port=9000 ++ ch_args=('--user' 'admin' '--password' 'VH838l0WO3' '--host' 'clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless' '--port' '9000' '--connect_timeout=5') ++ local ch_args ++ clickhouse-client --user admin --password VH838l0WO3 --host clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless --port 9000 --connect_timeout=5 --query 'EXISTS TABLE `kubeblocks`.`__restore_ready__`' + [[ 1 == \1 ]] + break + clickhouse-backup restore_remote backup-ns-uwpgk-clkhouse-icopne-20260211191142 --data 2026-02-11 11:15:25.153 INF pkg/clickhouse/clickhouse.go:120 > clickhouse connection success: tcp://clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless:9000 2026-02-11 11:15:25.153 INF pkg/clickhouse/clickhouse.go:1185 > SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' 2026-02-11 11:15:25.154 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies') 2026-02-11 11:15:25.156 INF pkg/clickhouse/clickhouse.go:1185 > SELECT d.path AS path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path 2026-02-11 11:15:25.158 WRN pkg/backup/download.go:96 > backup-ns-uwpgk-clkhouse-icopne-20260211191142 already exists will try to resume download 2026-02-11 11:15:25.158 INF pkg/clickhouse/clickhouse.go:1185 > SELECT max(toInt64(bytes_on_disk * 1.02)) AS max_file_size FROM system.parts WHERE active SETTINGS empty_result_for_aggregation_by_empty_set=0 2026-02-11 11:15:25.175 INF pkg/storage/general.go:222 > , list_duration=7.346462 2026-02-11 11:15:25.195 INF pkg/resumable/state.go:180 > /bitnami/clickhouse/data/backup/backup-ns-uwpgk-clkhouse-icopne-20260211191142/metadata/default/test_kbcli.json already processed, size 545B 2026-02-11 11:15:25.195 INF pkg/backup/download.go:521 > done, table_metadata=default.test_kbcli 2026-02-11 11:15:25.195 INF pkg/resumable/state.go:180 > backup-ns-uwpgk-clkhouse-icopne-20260211191142/shadow/default/test_kbcli/default_all_1_1_0.tar already processed, size 408B 2026-02-11 11:15:25.195 INF pkg/backup/download.go:233 > done, backup_name=backup-ns-uwpgk-clkhouse-icopne-20260211191142, duration=0s, operation=download_data, progress=1/1, size=408B, table=default.test_kbcli, version=2.6.42 2026-02-11 11:15:25.200 INF pkg/backup/download.go:312 > done, backup=backup-ns-uwpgk-clkhouse-icopne-20260211191142, download_size=953B, duration=42ms, object_disk_size=0B, operation=download, version=2.6.42 2026-02-11 
11:15:25.201 INF pkg/clickhouse/clickhouse.go:322 > clickhouse connection closed 2026-02-11 11:15:25.202 INF pkg/clickhouse/clickhouse.go:120 > clickhouse connection success: tcp://clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless:9000 2026-02-11 11:15:25.202 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='type') AS is_disk_type_present, countIf(name='object_storage_type') AS is_object_storage_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies') 2026-02-11 11:15:25.226 INF pkg/clickhouse/clickhouse.go:1185 > SELECT d.path AS path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d LEFT JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path 2026-02-11 11:15:25.293 INF pkg/clickhouse/clickhouse.go:1183 > CREATE DATABASE IF NOT EXISTS `default` ON CLUSTER 'default' ENGINE = Atomic with args []interface {}{[]interface {}(nil)} 2026-02-11 11:15:25.497 INF pkg/clickhouse/clickhouse.go:1183 > SELECT name, count(*) as is_present FROM system.settings WHERE name IN (?, ?) GROUP BY name with args []interface {}{"show_table_uuid_in_table_create_query_if_not_nil", "display_secrets_in_show_and_select"} 2026-02-11 11:15:25.500 INF pkg/clickhouse/clickhouse.go:1185 > SELECT name FROM system.databases WHERE engine IN ('MySQL','PostgreSQL','MaterializedPostgreSQL') 2026-02-11 11:15:25.501 INF pkg/clickhouse/clickhouse.go:1185 > SELECT countIf(name='data_path') is_data_path_present, countIf(name='data_paths') is_data_paths_present, countIf(name='uuid') is_uuid_present, countIf(name='create_table_query') is_create_table_query_present, countIf(name='total_bytes') is_total_bytes_present FROM system.columns WHERE database='system' AND table='tables' 2026-02-11 11:15:25.503 INF pkg/clickhouse/clickhouse.go:1185 > SELECT database, name, engine , data_paths , uuid , create_table_query , coalesce(total_bytes, 0) AS total_bytes FROM system.tables WHERE is_temporary = 0 AND match(concat(database,'.',name),'^.*$') ORDER BY total_bytes DESC SETTINGS show_table_uuid_in_table_create_query_if_not_nil=1 2026-02-11 11:15:25.509 INF pkg/clickhouse/clickhouse.go:1185 > SELECT metadata_path FROM system.tables WHERE database = 'system' AND metadata_path!='' LIMIT 1 2026-02-11 11:15:25.511 INF pkg/clickhouse/clickhouse.go:1185 > SELECT sum(bytes_on_disk) as size FROM system.parts WHERE active AND database='executions_loop' AND table='executions_loop_table' GROUP BY database, table 2026-02-11 11:15:25.513 INF pkg/clickhouse/clickhouse.go:322 > clickhouse connection closed 2026-02-11 11:15:25.513 FTL cmd/clickhouse-backup/main.go:820 > , error='default.test_kbcli' is not created. 
Restore schema first or create missing tables manually
+ DP_error_log 'Clickhouse-backup restore_remote --data FAILED'
+ msg='Clickhouse-backup restore_remote --data FAILED'
++ date -u '+%Y-%m-%d %H:%M:%S'
+ local 'curr_date=2026-02-11 11:15:25'
+ echo '2026-02-11 11:15:25 ERROR: Clickhouse-backup restore_remote --data FAILED'
2026-02-11 11:15:25 ERROR: Clickhouse-backup restore_remote --data FAILED
+ return 1
+ exit 1
+ handle_exit
+ exit_code=1
+ '[' 1 -ne 0 ']'
+ DP_error_log 'Backup failed with exit code 1'
+ msg='Backup failed with exit code 1'
++ date -u '+%Y-%m-%d %H:%M:%S'
+ local 'curr_date=2026-02-11 11:15:25'
2026-02-11 11:15:25 ERROR: Backup failed with exit code 1
+ echo '2026-02-11 11:15:25 ERROR: Backup failed with exit code 1'
+ touch .exit
+ exit 1
------------------------------------------------------------------------------------------------------------------
 `kbcli cluster describe-backup --names backup-ns-uwpgk-clkhouse-icopne-20260211191142 --namespace ns-uwpgk `
Name: backup-ns-uwpgk-clkhouse-icopne-20260211191142
Cluster: clkhouse-icopne
Namespace: ns-uwpgk
Spec:
  Method: full
  Policy Name: clkhouse-icopne-clickhouse-backup-policy
Actions:
  dp-backup-clickhouse-7gx-0:
    ActionType: Job
    WorkloadName: dp-backup-clickhouse-7gx-0-backup-ns-uwpgk-clkhouse-icopne-2026
    TargetPodName: clkhouse-icopne-clickhouse-7gx-0
    Phase: Completed
    Start Time: Feb 11,2026 19:11 UTC+0800
    Completion Time: Feb 11,2026 19:12 UTC+0800
  dp-backup-clickhouse-6x4-0:
    ActionType: Job
    WorkloadName: dp-backup-clickhouse-6x4-0-backup-ns-uwpgk-clkhouse-icopne-2026
    TargetPodName: clkhouse-icopne-clickhouse-6x4-0
    Phase: Completed
    Start Time: Feb 11,2026 19:11 UTC+0800
    Completion Time: Feb 11,2026 19:12 UTC+0800
Status:
  Phase: Completed
  Total Size: 20541
  ActionSet Name: clickhouse-full-backup
  Repository: backuprepo-kbcli-test
  Duration: 23s
  Start Time: Feb 11,2026 19:11 UTC+0800
  Completion Time: Feb 11,2026 19:12 UTC+0800
  Path: /ns-uwpgk/clkhouse-icopne-d12ce1e8-3113-4112-90de-9bdd66a10e52/clickhouse/backup-ns-uwpgk-clkhouse-icopne-20260211191142
Warning Events:
cluster connect
 `kubectl get secrets -l app.kubernetes.io/instance=clkhouse-icopne-backup`
set secret: clkhouse-icopne-backup-clickhouse-5lm-account-admin
 `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.username}"`
 `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.password}"`
 `kubectl get secrets clkhouse-icopne-backup-clickhouse-5lm-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME_PROXY:;DB_PASSWORD_PROXY:;DB_PORT_PROXY:8123
DB_USERNAME:admin;DB_PASSWORD:VH838l0WO3;DB_PORT:9000;DB_DATABASE:default
 `echo 'clickhouse-client --host clkhouse-icopne-backup-clickhouse-5lm.ns-uwpgk.svc.cluster.local --port 9000 --user admin --password "VH838l0WO3" --query "SELECT * FROM system.clusters"' | kubectl exec -it clkhouse-icopne-backup-clickhouse-5lm-0 --namespace ns-uwpgk -- bash `
default 1 1 1 clkhouse-icopne-backup-clickhouse-5lm-0.clkhouse-icopne-backup-clickhouse-5lm-headless.ns-uwpgk.svc.cluster.local 10.244.3.163 9000 1 admin 0 0 0
default 1 1 2 clkhouse-icopne-backup-clickhouse-5lm-1.clkhouse-icopne-backup-clickhouse-5lm-headless.ns-uwpgk.svc.cluster.local 10.244.2.123 9000 0 admin 0 0 0
default 2 1 1 clkhouse-icopne-backup-clickhouse-ngn-0.clkhouse-icopne-backup-clickhouse-ngn-headless.ns-uwpgk.svc.cluster.local 10.244.4.27 9000 0 admin 0 0 0
default 2 1 2 clkhouse-icopne-backup-clickhouse-ngn-1.clkhouse-icopne-backup-clickhouse-ngn-headless.ns-uwpgk.svc.cluster.local 10.244.2.227 9000 0 admin 0 0 0
connect cluster Success
delete cluster clkhouse-icopne-backup
 `kbcli cluster delete clkhouse-icopne-backup --auto-approve --namespace ns-uwpgk `
pod_info:
clkhouse-icopne-backup-ch-keeper-0 2/2 Running 0 7m58s
clkhouse-icopne-backup-ch-keeper-1 2/2 Running 0 7m59s
clkhouse-icopne-backup-ch-keeper-2 2/2 Running 0 7m58s
clkhouse-icopne-backup-clickhouse-5lm-0 2/2 Running 1 (5m21s ago) 6m58s
clkhouse-icopne-backup-clickhouse-5lm-1 2/2 Running 0 6m58s
clkhouse-icopne-backup-clickhouse-ngn-0 2/2 Running 0 6m58s
clkhouse-icopne-backup-clickhouse-ngn-1 2/2 Running 0 6m58s
Cluster clkhouse-icopne-backup deleted
delete cluster pod done
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
checking pvc non-exist
[Error] check cluster resource non-exist TIMED-OUT: pvc
data-clkhouse-icopne-backup-clickhouse-ngn-0
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pvc data-clkhouse-icopne-backup-clickhouse-ngn-0 --namespace ns-uwpgk `
persistentvolumeclaim/data-clkhouse-icopne-backup-clickhouse-ngn-0 patched
delete cluster done
cluster delete backup
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-uwpgk-clkhouse-icopne-20260211191142 --namespace ns-uwpgk `
backup.dataprotection.kubeblocks.io/backup-ns-uwpgk-clkhouse-icopne-20260211191142 patched
 `kbcli cluster delete-backup clkhouse-icopne --name backup-ns-uwpgk-clkhouse-icopne-20260211191142 --force --auto-approve --namespace ns-uwpgk `
Backup backup-ns-uwpgk-clkhouse-icopne-20260211191142 deleted
get cluster clkhouse-icopne shard clickhouse component name
 `kubectl get component -l "app.kubernetes.io/instance=clkhouse-icopne,apps.kubeblocks.io/sharding-name=clickhouse" --namespace ns-uwpgk`
set shard component name:clickhouse-6x4
cluster list-logs
 `kbcli cluster list-logs clkhouse-icopne --component clickhouse-6x4 --namespace ns-uwpgk `
cluster logs
 `kbcli cluster logs clkhouse-icopne --tail 30 --namespace ns-uwpgk `
10. __clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.3.18.37 (official build))
2026.02.11 11:18:03.000730 [ 129 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file /sys/block/sde/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xb3aac1a in /opt/bitnami/clickhouse/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xb3ac00a in /opt/bitnami/clickhouse/bin/clickhouse
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xb3f5900 in /opt/bitnami/clickhouse/bin/clickhouse
3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x1535ca11 in /opt/bitnami/clickhouse/bin/clickhouse
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x15351d2a in /opt/bitnami/clickhouse/bin/clickhouse 5.
DB::AsynchronousMetrics::run() @ 0x1535beee in /opt/bitnami/clickhouse/bin/clickhouse 6. ? @ 0x15360730 in /opt/bitnami/clickhouse/bin/clickhouse 7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xb44f9f7 in /opt/bitnami/clickhouse/bin/clickhouse 8. ? @ 0xb45357d in /opt/bitnami/clickhouse/bin/clickhouse 9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so 10. __clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.3.18.37 (official build)) 2026.02.11 11:20:39.000842 [ 129 ] {} void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file /sys/block/sdd/stat, errno: 19, strerror: No such device. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below): 0. DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int, bool) @ 0xb3aac1a in /opt/bitnami/clickhouse/bin/clickhouse 1. DB::throwFromErrnoWithPath(std::__1::basic_string, std::__1::allocator > const&, std::__1::basic_string, std::__1::allocator > const&, int, int) @ 0xb3ac00a in /opt/bitnami/clickhouse/bin/clickhouse 2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0xb3f5900 in /opt/bitnami/clickhouse/bin/clickhouse 3. DB::AsynchronousMetrics::BlockDeviceStatValues::read(DB::ReadBuffer&) @ 0x1535ca11 in /opt/bitnami/clickhouse/bin/clickhouse 4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point > >) @ 0x15351d2a in /opt/bitnami/clickhouse/bin/clickhouse 5. DB::AsynchronousMetrics::run() @ 0x1535beee in /opt/bitnami/clickhouse/bin/clickhouse 6. ? @ 0x15360730 in /opt/bitnami/clickhouse/bin/clickhouse 7. ThreadPoolImpl::worker(std::__1::__list_iterator) @ 0xb44f9f7 in /opt/bitnami/clickhouse/bin/clickhouse 8. ? @ 0xb45357d in /opt/bitnami/clickhouse/bin/clickhouse 9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so 10. 
__clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so (version 22.3.18.37 (official build))
delete cluster clkhouse-icopne
 `kbcli cluster delete clkhouse-icopne --auto-approve --namespace ns-uwpgk `
pod_info:
clkhouse-icopne-ch-keeper-0 2/2 Running 0 26m
clkhouse-icopne-ch-keeper-1 2/2 Running 0 26m
clkhouse-icopne-ch-keeper-2 2/2 Running 0 27m
clkhouse-icopne-clickhouse-6x4-0 2/2 Running 0 35m
clkhouse-icopne-clickhouse-6x4-1 2/2 Running 0 35m
clkhouse-icopne-clickhouse-7gx-0 2/2 Running 2 (21m ago) 35m
clkhouse-icopne-clickhouse-7gx-1 2/2 Running 0 36m
Cluster clkhouse-icopne deleted
pod_info:clkhouse-icopne-ch-keeper-2 2/2 Terminating 0 27m
delete cluster pod done
check cluster resource non-exist OK: pvc
delete cluster done
Clickhouse Test Suite All Done!
Test Engine: clickhouse
Test Type: 29
--------------------------------------Clickhouse (Topology = cluster Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=cluster;ComponentDefinition=clickhouse-1.0.2;ComponentVersion=clickhouse;ServiceVersion=22.3.18;]|[Description=Create a cluster with the specified topology cluster with the specified component definition clickhouse-1.0.2 and component version clickhouse and service version 22.3.18]
[PASSED]|[Connect]|[ComponentName=clickhouse-fwl]|[Description=Connect to the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=clickhouse]|[Description=VerticalScaling the cluster specify component clickhouse]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Upgrade]|[ComponentName=ch-keeper;ComponentVersionFrom=22.3.18;ComponentVersionTo=22.3.20]|[Description=Upgrade the cluster specify component ch-keeper service version from 22.3.18 to 22.3.20]
[PASSED]|[Upgrade]|[ComponentName=ch-keeper;ComponentVersionFrom=22.3.20;ComponentVersionTo=22.8.21]|[Description=Upgrade the cluster specify component ch-keeper service version from 22.3.20 to 22.8.21]
[PASSED]|[Upgrade]|[ComponentName=ch-keeper;ComponentVersionFrom=22.8.21;ComponentVersionTo=22.3.20]|[Description=Upgrade the cluster specify component ch-keeper service version from 22.8.21 to 22.3.20]
[PASSED]|[Upgrade]|[ComponentName=ch-keeper;ComponentVersionFrom=22.3.20;ComponentVersionTo=22.3.18]|[Description=Upgrade the cluster specify component ch-keeper service version from 22.3.20 to 22.3.18]
[PASSED]|[NoFailover]|[HA=Full CPU;Durations=2m;ComponentName=clickhouse-fwl]|[Description=Simulates conditions where pods experience CPU full either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.]
[PASSED]|[NoFailover]|[HA=Pod Kill;ComponentName=clickhouse-fwl]|[Description=Simulates conditions where pods experience kill for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to kill.]
[PASSED]|[VolumeExpansion]|[ComponentName=clickhouse]|[Description=VolumeExpansion the cluster specify component clickhouse]
[PASSED]|[NoFailover]|[HA=Pod Failure;Durations=2m;ComponentName=clickhouse-fwl]|[Description=Simulates conditions where pods experience failure for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to failure.]
[PASSED]|[VerticalScaling]|[ComponentName=ch-keeper]|[Description=VerticalScaling the cluster specify component ch-keeper]
[PASSED]|[Upgrade]|[ComponentName=clickhouse;ComponentVersionFrom=22.3.18;ComponentVersionTo=22.3.20]|[Description=Upgrade the cluster specify component clickhouse service version from 22.3.18 to 22.3.20]
[PASSED]|[Upgrade]|[ComponentName=clickhouse;ComponentVersionFrom=22.3.20;ComponentVersionTo=22.8.21]|[Description=Upgrade the cluster specify component clickhouse service version from 22.3.20 to 22.8.21]
[PASSED]|[Upgrade]|[ComponentName=clickhouse;ComponentVersionFrom=22.8.21;ComponentVersionTo=22.3.20]|[Description=Upgrade the cluster specify component clickhouse service version from 22.8.21 to 22.3.20]
[PASSED]|[Upgrade]|[ComponentName=clickhouse;ComponentVersionFrom=22.3.20;ComponentVersionTo=22.3.18]|[Description=Upgrade the cluster specify component clickhouse service version from 22.3.20 to 22.3.18]
[PASSED]|[Scale Out Shard Post]|[ShardsName=clickhouse]|[Description=-]
[PASSED]|[HorizontalScaling Out]|[ShardsName=clickhouse]|[Description=HorizontalScaling Out the cluster]
[PASSED]|[HorizontalScaling In]|[ShardsName=clickhouse]|[Description=HorizontalScaling In the cluster]
[PASSED]|[NoFailover]|[HA=Network Loss;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network loss fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to loss network.]
[PASSED]|[NoFailover]|[HA=DNS Random;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates conditions where pods experience random IP addresses being returned by the DNS service for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to the DNS service returning random IP addresses.]
[PASSED]|[NoFailover]|[HA=Time Offset;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates a time offset scenario thereby testing the application's resilience to potential slowness/unavailability of some replicas due to time offset.]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Restart]|[ComponentName=clickhouse]|[Description=Restart the cluster specify component clickhouse]
[PASSED]|[VolumeExpansion]|[ComponentName=ch-keeper]|[Description=VolumeExpansion the cluster specify component ch-keeper]
[PASSED]|[NoFailover]|[HA=Network Duplicate;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network duplicate fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to duplicate network.]
[PASSED]|[NoFailover]|[HA=Kill 1;ComponentName=clickhouse-7gx]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.]
[PASSED]|[Restart]|[ComponentName=ch-keeper]|[Description=Restart the cluster specify component ch-keeper]
[PASSED]|[NoFailover]|[HA=Network Bandwidth;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network bandwidth fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to bandwidth network.]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[PASSED]|[NoFailover]|[HA=OOM;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates conditions where pods experience OOM either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Memory load.]
[PASSED]|[HorizontalScaling Out]|[ComponentName=clickhouse]|[Description=HorizontalScaling Out the cluster specify component clickhouse]
[PASSED]|[HorizontalScaling In]|[ComponentName=clickhouse]|[Description=HorizontalScaling In the cluster specify component clickhouse]
[PASSED]|[NoFailover]|[HA=Network Corrupt;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network corrupt fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to corrupt network.]
[PASSED]|[NoFailover]|[HA=Connection Stress;ComponentName=clickhouse-7gx]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[NoFailover]|[HA=Network Partition;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network partition fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to partition network.]
[PASSED]|[NoFailover]|[HA=DNS Error;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates conditions where pods experience DNS service errors for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to DNS service errors.]
[PASSED]|[NoFailover]|[HA=Network Delay;Durations=2m;ComponentName=clickhouse-7gx]|[Description=Simulates network delay fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to delay network.]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Backup]|[BackupMethod=full]|[Description=The cluster full Backup]
[PASSED]|[Restore]|[BackupMethod=full]|[Description=The cluster full Restore]
[PASSED]|[Connect]|[ComponentName=clickhouse-5lm]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=full]|[Description=Delete the full restore cluster]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
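
For reference, the forced-cleanup commands used earlier in this run when the restored cluster's PVC and the Backup object were stuck on finalizers, collected in one place (namespace and resource names are specific to this run):

  kubectl patch pvc data-clkhouse-icopne-backup-clickhouse-ngn-0 --namespace ns-uwpgk \
    --type=merge -p '{"metadata":{"finalizers":[]}}'
  kubectl patch backups backup-ns-uwpgk-clkhouse-icopne-20260211191142 --namespace ns-uwpgk \
    --type=merge -p '{"metadata":{"finalizers":[]}}'
  kbcli cluster delete-backup clkhouse-icopne --name backup-ns-uwpgk-clkhouse-icopne-20260211191142 \
    --force --auto-approve --namespace ns-uwpgk
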