bash test/kbcli/test_kbcli_0.9.sh --type 15 --version 0.9.5 --generate-output true --chaos-mesh true --drain-node true --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-nluhk`
`kubectl create namespace ns-nluhk`
namespace/ns-nluhk created
create namespace ns-nluhk done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.5-beta.8`
Your system is linux_amd64
Installing kbcli ...
Downloading ... (curl progress meter omitted; 32.1M downloaded)
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Make sure your docker service is running and begin your journey with kbcli:
	kbcli playground init
For more information on how to get started, please visit:
	https://kubeblocks.io
download kbcli v0.9.5-beta.8 done
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Kubernetes Env: v1.32.6
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
kubeblocks version is:0.9.5
skip upgrade kubeblocks
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
set component name:etcd
set component version
set component version:etcd
set service versions:3.5.15,3.5.6,3.6.1
set service versions sorted:3.5.6,3.5.15,3.6.1
no cluster version found
unsupported component definition
REPORT_COUNT 0:0
set replicas first:3,3.5.6|3,3.5.15|3,3.6.1
set replicas third:3,3.5.15
set replicas fourth:3,3.5.6
set minimum cmpv service version
set minimum cmpv service version replicas:3,3.5.6
REPORT_COUNT:1
CLUSTER_TOPOLOGY:
Error from server (NotFound): clusterdefinitions.apps.kubeblocks.io "etcd" not found
Not found topology in cluster definition etcd
LIMIT_CPU:0.1
LIMIT_MEMORY:0.5
storage size: 1
No resources found in ns-nluhk namespace.
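The setup above is a guard-then-create pattern: probe for the namespace, install a pinned kbcli, and register the Helm repo. A minimal idempotent sketch of the same prelude, assuming the same namespace and version (variable names are illustrative):

    # Idempotent test prelude: create the namespace only if absent, install a
    # pinned kbcli, and add the chaos-mesh repo exactly once.
    NS=ns-nluhk
    KBCLI_VERSION=v0.9.5-beta.8
    kubectl get namespace "$NS" >/dev/null 2>&1 || kubectl create namespace "$NS"
    curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s "$KBCLI_VERSION"
    helm repo list 2>/dev/null | grep -q chaos-mesh || helm repo add chaos-mesh https://charts.chaos-mesh.org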
termination_policy:Halt create 3 replica Halt etcd cluster check component definition set component definition by component version check cmpd by labels set component definition2: etcd by component version:etcd apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: name: etcd-hpbmma namespace: ns-nluhk spec: terminationPolicy: Halt componentSpecs: - name: etcd componentDef: etcd serviceVersion: 3.5.6 replicas: 3 resources: requests: cpu: 100m memory: 0.5Gi limits: cpu: 100m memory: 0.5Gi volumeClaimTemplates: - name: data spec: storageClassName: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi services: - name: client serviceName: client spec: type: NodePort ports: - port: 2379 targetPort: 2379 componentSelector: etcd roleSelector: leader `kubectl apply -f test_create_etcd-hpbmma.yaml` cluster.apps.kubeblocks.io/etcd-hpbmma created apply test_create_etcd-hpbmma.yaml Success `rm -rf test_create_etcd-hpbmma.yaml` check cluster status `kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS etcd-hpbmma ns-nluhk Halt Sep 01,2025 11:18 UTC+0800 cluster_status: cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 check pod status done check cluster role check cluster role done leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2 No resources found in ns-nluhk namespace. check cluster connect `echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash` check cluster connect done `kubectl get secrets -l app.kubernetes.io/instance=etcd-hpbmma` No resources found in ns-nluhk namespace. Not found cluster secret DB_USERNAME:;DB_PASSWORD:;DB_PORT:2379;DB_DATABASE: There is no password in Type: 15. 
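The twenty Creating polls above can be collapsed into one blocking call; a sketch assuming the Cluster CR exposes `.status.phase` the way kbcli reports it (worth verifying against the installed CRD version):

    # Block until the KubeBlocks Cluster reports Running instead of polling.
    kubectl wait clusters.apps.kubeblocks.io/etcd-hpbmma -n ns-nluhk \
      --for=jsonpath='{.status.phase}'=Running --timeout=10m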
describe cluster
`kbcli cluster describe etcd-hpbmma --namespace ns-nluhk`
Name: etcd-hpbmma	 Created Time: Sep 01,2025 11:18 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION   STATUS    TERMINATION-POLICY
ns-nluhk                                   Running   Halt

Endpoints:
COMPONENT   MODE   INTERNAL   EXTERNAL

Topology:
COMPONENT   INSTANCE             ROLE       STATUS    AZ   NODE                                              CREATED-TIME
etcd        etcd-hpbmma-etcd-0   leader     Running   0    aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd        etcd-hpbmma-etcd-1   follower   Running   0    aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd        etcd-hpbmma-etcd-2   follower   Running   0    aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
etcd        false       100m / 100m          512Mi / 512Mi           data:1Gi       default

Images:
COMPONENT   TYPE   IMAGE
etcd               docker.io/apecloud/etcd:v3.5.6

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-nluhk etcd-hpbmma

`kbcli cluster label etcd-hpbmma app.kubernetes.io/instance- --namespace ns-nluhk`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label etcd-hpbmma app.kubernetes.io/instance=etcd-hpbmma --namespace ns-nluhk`
`kbcli cluster label etcd-hpbmma --list --namespace ns-nluhk`
NAME          NAMESPACE   LABELS
etcd-hpbmma   ns-nluhk    app.kubernetes.io/instance=etcd-hpbmma
label cluster app.kubernetes.io/instance=etcd-hpbmma Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=etcd-hpbmma --namespace ns-nluhk`
`kbcli cluster label etcd-hpbmma --list --namespace ns-nluhk`
NAME          NAMESPACE   LABELS
etcd-hpbmma   ns-nluhk    app.kubernetes.io/instance=etcd-hpbmma case.name=kbcli.test1
label cluster case.name=kbcli.test1 Success
`kbcli cluster label etcd-hpbmma case.name=kbcli.test2 --overwrite --namespace ns-nluhk`
`kbcli cluster label etcd-hpbmma --list --namespace ns-nluhk`
NAME          NAMESPACE   LABELS
etcd-hpbmma   ns-nluhk    app.kubernetes.io/instance=etcd-hpbmma case.name=kbcli.test2
label cluster case.name=kbcli.test2 Success
`kbcli cluster label etcd-hpbmma case.name- --namespace ns-nluhk`
`kbcli cluster label etcd-hpbmma --list --namespace ns-nluhk`
NAME          NAMESPACE   LABELS
etcd-hpbmma   ns-nluhk    app.kubernetes.io/instance=etcd-hpbmma
delete cluster label case.name Success
cluster connect
No resources found in ns-nluhk namespace.
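The add/overwrite/remove cycle above mirrors standard kubectl label semantics; an equivalent sketch against the Cluster CR, using the fully qualified resource name to avoid ambiguity (kbcli-specific behavior aside, these kubectl verbs are standard):

    # Add, overwrite, then remove a label on the Cluster object.
    kubectl label clusters.apps.kubeblocks.io etcd-hpbmma case.name=kbcli.test1 -n ns-nluhk
    kubectl label clusters.apps.kubeblocks.io etcd-hpbmma case.name=kbcli.test2 --overwrite -n ns-nluhk
    kubectl label clusters.apps.kubeblocks.io etcd-hpbmma case.name- -n ns-nluhk  # trailing '-' deletes the key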
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 member list" | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
Defaulted container "etcd" out of: etcd, lorry, inject-bash (init), init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
633a9a7a806106b4, started, etcd-hpbmma-etcd-1, http://etcd-hpbmma-etcd-1.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-etcd-1.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
9a11f373ea4feb3b, started, etcd-hpbmma-etcd-0, http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
aa0af6056900ee9c, started, etcd-hpbmma-etcd-2, http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-etcd-hpbmma" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-etcd-hpbmma --namespace ns-nluhk`
Error from server (NotFound): pods "test-db-client-executionloop-etcd-hpbmma" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-etcd-hpbmma" not found
`kubectl get secrets -l app.kubernetes.io/instance=etcd-hpbmma`
No resources found in ns-nluhk namespace.
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:2379;DB_DATABASE:
No resources found in ns-nluhk namespace.
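Both notices above ("Defaulted container" and "Unable to use a TTY") are avoidable: name the container explicitly and drop `-t` when piping stdin. A sketch of the same member-list probe:

    # -i keeps stdin open for the piped command; omitting -t avoids the TTY
    # warning, and -c etcd skips the "Defaulted container" notice.
    echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 member list' \
      | kubectl exec -i etcd-hpbmma-etcd-0 -c etcd -n ns-nluhk -- bash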
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-etcd-hpbmma
  namespace: ns-nluhk
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "etcd-hpbmma-client.ns-nluhk.svc.cluster.local"
        - "--user"
        - ""
        - "--password"
        - ""
        - "--port"
        - "2379"
        - "--dbtype"
        - "etcd"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-etcd-hpbmma.yaml`
pod/test-db-client-executionloop-etcd-hpbmma created
apply test-db-client-executionloop-etcd-hpbmma.yaml Success
`rm -rf test-db-client-executionloop-etcd-hpbmma.yaml`
check pod status
pod_status: test-db-client-executionloop-etcd-hpbmma 1/1 Running 0 (11 polls from 6s to 59s of age)
check pod test-db-client-executionloop-etcd-hpbmma status done
pod_status: test-db-client-executionloop-etcd-hpbmma 0/1 Completed 0 65s
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
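Since the client pod is a one-shot job (restartPolicy: Never), the status polling above can also be a single blocking wait; a sketch using standard kubectl jsonpath waiting:

    # Wait for the one-shot db client to finish instead of polling its phase.
    kubectl wait pod/test-db-client-executionloop-etcd-hpbmma -n ns-nluhk \
      --for=jsonpath='{.status.phase}'=Succeeded --timeout=5m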
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
--host etcd-hpbmma-client.ns-nluhk.svc.cluster.local --user --password --port 2379 --dbtype etcd --test executionloop --duration 60 --interval 1
Using no auth
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
Execution loop start: Get key prefix:executions_loop_key
Execution loop start:PUT:executions_loop_key_1:executions_loop_value_1
[  1s ] executions total: 60    successful: 60    failed: 0  disconnect: 0
[  2s ] executions total: 132   successful: 132   failed: 0  disconnect: 0
... (per-second progress elided: roughly 60-100 successful executions per second, no failures, no disconnects) ...
[ 59s ] executions total: 5285  successful: 5285  failed: 0  disconnect: 0
[ 60s ] executions total: 5320  successful: 5320  failed: 0  disconnect: 0
Test Result:
Total Executions: 5320
Successful Executions: 5320
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: etcd
Host: etcd-hpbmma-client.ns-nluhk.svc.cluster.local
Port: 2379
Database:
Table:
User:
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 5320
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-etcd-hpbmma --namespace ns-nluhk`
pod/test-db-client-executionloop-etcd-hpbmma patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-etcd-hpbmma" force deleted
test failover networkpartition
check cluster status before cluster-failover-networkpartition
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
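The cleanup above pairs a finalizer-clearing patch with a forced delete, which is why kubectl prints the immediate-deletion warning; the generic pattern looks like this (it bypasses graceful termination, so it belongs in test teardown only):

    # Clear finalizers, then force-delete without waiting for graceful shutdown.
    kubectl patch pod test-db-client-executionloop-etcd-hpbmma -n ns-nluhk \
      --type=merge -p '{"metadata":{"finalizers":[]}}'
    kubectl delete pod test-db-client-executionloop-etcd-hpbmma -n ns-nluhk \
      --force --grace-period=0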
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkpartition-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  action: partition
  mode: all
  target:
    mode: all
    selector:
      namespaces:
        - ns-nluhk
      labelSelectors:
        apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-1
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkpartition-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-etcd-hpbmma created
apply test-chaos-mesh-networkpartition-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networkpartition-etcd-hpbmma.yaml`
networkpartition chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover networkpartition Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
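Before trusting the 120-second wait, it can help to confirm Chaos Mesh actually injected the partition; NetworkChaos is an ordinary namespaced CRD, so plain kubectl verbs apply:

    # Inspect the experiment's recorded status and events after applying it.
    kubectl describe networkchaos test-chaos-mesh-networkpartition-etcd-hpbmma -n ns-nluhk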
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
test failover oom
check cluster status before cluster-failover-oom
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-etcd-hpbmma" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-oom-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  stressors:
    memory:
      workers: 1
      size: "100GB"
      oomScoreAdj: -1000
  duration: 2m
`kubectl apply -f test-chaos-mesh-oom-etcd-hpbmma.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-etcd-hpbmma created
apply test-chaos-mesh-oom-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-oom-etcd-hpbmma.yaml`
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
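The OOM stress demoted etcd-hpbmma-etcd-0 and the leader moved to etcd-hpbmma-etcd-2; rather than parsing list-instances output, the current leader can be selected by label. A sketch, with the kubeblocks.io/role key assumed from 0.9.x behavior and worth verifying in your deployment:

    # Find the current leader pod of the cluster by its role label.
    kubectl get pods -n ns-nluhk \
      -l app.kubernetes.io/instance=etcd-hpbmma,kubeblocks.io/role=leader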
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-etcd-hpbmma" force deleted
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-etcd-hpbmma patched
check failover pod name
failover pod name:etcd-hpbmma-etcd-2
failover oom Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in etcd-hpbmma namespace.
`kbcli cluster hscale etcd-hpbmma --auto-approve --force=true --components etcd --replicas 4 --namespace ns-nluhk`
OpsRequest etcd-hpbmma-horizontalscaling-n68lk created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-horizontalscaling-n68lk -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-horizontalscaling-n68lk   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Running   0/1        Sep 01,2025 11:31 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (repeated ×27 while the new replica joined)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
etcd-hpbmma-etcd-3   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:34 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-3
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
No resources found in etcd-hpbmma namespace.
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-horizontalscaling-n68lk   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Succeed   1/1        Sep 01,2025 11:31 UTC+0800
check ops status done
ops_status:etcd-hpbmma-horizontalscaling-n68lk ns-nluhk HorizontalScaling etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:31 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-horizontalscaling-n68lk --namespace ns-nluhk`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-horizontalscaling-n68lk patched
`kbcli cluster delete-ops --name etcd-hpbmma-horizontalscaling-n68lk --force --auto-approve --namespace ns-nluhk`
OpsRequest etcd-hpbmma-horizontalscaling-n68lk deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in etcd-hpbmma namespace.
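The `--replicas` hscale calls in this section are thin wrappers over the same OpsRequest API the test later drives directly for scaleIn/scaleOut. A rough declarative equivalent of the scale-in that follows (a sketch; the flat replicas field is assumed from the apps.kubeblocks.io/v1alpha1 API, and newer releases favor scaleIn/scaleOut):

    # Declarative equivalent of `kbcli cluster hscale ... --replicas 3`.
    kubectl create -n ns-nluhk -f - <<'EOF'
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: OpsRequest
    metadata:
      generateName: etcd-hpbmma-hscale-
    spec:
      type: HorizontalScaling
      clusterName: etcd-hpbmma
      horizontalScaling:
        - componentName: etcd
          replicas: 3   # assumed field; see the scaleIn/scaleOut forms later in this log
    EOF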
`kbcli cluster hscale etcd-hpbmma --auto-approve --force=true --components etcd --replicas 3 --namespace ns-nluhk`
OpsRequest etcd-hpbmma-horizontalscaling-q8qpz created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-horizontalscaling-q8qpz -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-horizontalscaling-q8qpz   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Running   0/1        Sep 01,2025 11:34 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
No resources found in etcd-hpbmma namespace.
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-horizontalscaling-q8qpz   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Succeed   1/1        Sep 01,2025 11:34 UTC+0800
check ops status done
ops_status:etcd-hpbmma-horizontalscaling-q8qpz ns-nluhk HorizontalScaling etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:34 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-horizontalscaling-q8qpz --namespace ns-nluhk`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-horizontalscaling-q8qpz patched
`kbcli cluster delete-ops --name etcd-hpbmma-horizontalscaling-q8qpz --force --auto-approve --namespace ns-nluhk`
OpsRequest etcd-hpbmma-horizontalscaling-q8qpz deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
test switchover cluster promote
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster promote etcd-hpbmma --auto-approve --force=true --component etcd --namespace ns-nluhk`
component:etcd
OpsRequest etcd-hpbmma-switchover-6gmc4 created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-switchover-6gmc4 -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                           NAMESPACE   TYPE         CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-switchover-6gmc4   ns-nluhk    Switchover   etcd-hpbmma   etcd        Running   0/1        Sep 01,2025 11:35 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                           NAMESPACE   TYPE         CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-switchover-6gmc4   ns-nluhk    Switchover   etcd-hpbmma   etcd        Succeed   1/1        Sep 01,2025 11:35 UTC+0800
check ops status done
ops_status:etcd-hpbmma-switchover-6gmc4 ns-nluhk Switchover etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:35 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-switchover-6gmc4 --namespace ns-nluhk`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-switchover-6gmc4 patched
`kbcli cluster delete-ops --name etcd-hpbmma-switchover-6gmc4 --force --auto-approve --namespace ns-nluhk`
OpsRequest etcd-hpbmma-switchover-6gmc4 deleted
No resources found in ns-nluhk namespace.
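The promote above let KubeBlocks pick the switchover target (the leader landed on etcd-hpbmma-etcd-1). A candidate can also be named explicitly through the Switchover OpsRequest; a sketch, with the instanceName field assumed from the v1alpha1 API and the target pod chosen purely for illustration:

    # Switch leadership to a named instance instead of an arbitrary candidate.
    kubectl create -n ns-nluhk -f - <<'EOF'
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: OpsRequest
    metadata:
      generateName: etcd-hpbmma-switchover-
    spec:
      type: Switchover
      clusterName: etcd-hpbmma
      switchover:
        - componentName: etcd
          instanceName: etcd-hpbmma-etcd-2   # hypothetical target for illustration
    EOF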
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
switchover pod:etcd-hpbmma-etcd-1
switchover success
test failover networkbandwidthover
check cluster status before cluster-failover-networkbandwidthover
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkbandwidthover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-1
  action: bandwidth
  mode: all
  bandwidth:
    rate: '1bps'
    limit: 20971520
    buffer: 10000
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkbandwidthover-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-etcd-hpbmma created
apply test-chaos-mesh-networkbandwidthover-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networkbandwidthover-etcd-hpbmma.yaml`
networkbandwidthover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
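Role labels and etcd's own view can disagree briefly after chaos; etcdctl can confirm leadership directly, using standard etcdctl flags:

    # Ask etcd itself which member is leader, across all cluster endpoints.
    echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint status --cluster -w table' \
      | kubectl exec -i etcd-hpbmma-etcd-0 -c etcd -n ns-nluhk -- bash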
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover networkbandwidthover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
test failover networkduplicate
check cluster status before cluster-failover-networkduplicate
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkduplicate-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  action: duplicate
  duplicate:
    duplicate: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkduplicate-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-etcd-hpbmma created
apply test-chaos-mesh-networkduplicate-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networkduplicate-etcd-hpbmma.yaml`
networkduplicate chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000004/10.224.0.6    Sep 01,2025 11:18 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover networkduplicate Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
test failover podkill
check cluster status before cluster-failover-podkill
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-etcd-hpbmma" not found
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: test-chaos-mesh-podkill-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  action: pod-kill
`kubectl apply -f test-chaos-mesh-podkill-etcd-hpbmma.yaml`
podchaos.chaos-mesh.org/test-chaos-mesh-podkill-etcd-hpbmma created
apply test-chaos-mesh-podkill-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-podkill-etcd-hpbmma.yaml`
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (repeated ×25 while the killed pod was replaced)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "etcd-hpbmma-etcd-0" force deleted
cluster_status:Updating (repeated ×8)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 11:43 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-etcd-hpbmma --namespace ns-nluhk`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
podchaos.chaos-mesh.org "test-chaos-mesh-podkill-etcd-hpbmma" force deleted
podchaos.chaos-mesh.org/test-chaos-mesh-podkill-etcd-hpbmma patched
check failover pod name
failover pod name:etcd-hpbmma-etcd-2
failover podkill Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
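Note that after the pod-kill, etcd-hpbmma-etcd-0 came back on a different node (vmss000007 instead of vmss000004); the reschedule is easy to confirm with a wide pod listing:

    # Show which node each cluster pod landed on after the pod-kill recovery.
    kubectl get pods -n ns-nluhk -o wide -l app.kubernetes.io/instance=etcd-hpbmma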
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
cluster hscale offline instances
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: etcd-hpbmma-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: etcd-hpbmma
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-nluhk
spec:
  type: HorizontalScaling
  clusterName: etcd-hpbmma
  force: true
  horizontalScaling:
    - componentName: etcd
      scaleIn:
        onlineInstancesToOffline:
          - etcd-hpbmma-etcd-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_etcd-hpbmma.yaml`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-hscaleoffinstance-wvzpz created
create test_ops_cluster_etcd-hpbmma.yaml Success
`rm -rf test_ops_cluster_etcd-hpbmma.yaml`
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-hscaleoffinstance-wvzpz   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Running   0/1        Sep 01,2025 11:44 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                  NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-hscaleoffinstance-wvzpz   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Succeed   1/1        Sep 01,2025 11:44 UTC+0800
check ops status done
ops_status:etcd-hpbmma-hscaleoffinstance-wvzpz ns-nluhk HorizontalScaling etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:44 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-hscaleoffinstance-wvzpz --namespace ns-nluhk`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-hscaleoffinstance-wvzpz patched
`kbcli cluster delete-ops --name etcd-hpbmma-hscaleoffinstance-wvzpz --force --auto-approve --namespace ns-nluhk`
OpsRequest etcd-hpbmma-hscaleoffinstance-wvzpz deleted
No resources found in ns-nluhk namespace.
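After a scaleIn with onlineInstancesToOffline, the evicted instance is recorded on the Cluster spec so a later scaleOut can bring it back. A quick check, with the offlineInstances path assumed from the v1alpha1 Cluster API:

    # List instances currently parked offline for the etcd component.
    kubectl get clusters.apps.kubeblocks.io etcd-hpbmma -n ns-nluhk \
      -o jsonpath='{.spec.componentSpecs[0].offlineInstances}'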
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check readonly db_client batch data Success
cluster hscale online instances
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: etcd-hpbmma-hscaleoninstance-
  labels:
    app.kubernetes.io/instance: etcd-hpbmma
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-nluhk
spec:
  type: HorizontalScaling
  clusterName: etcd-hpbmma
  force: true
  horizontalScaling:
    - componentName: etcd
      scaleOut:
        offlineInstancesToOnline:
          - etcd-hpbmma-etcd-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_etcd-hpbmma.yaml`
opsrequest.apps.kubeblocks.io/etcd-hpbmma-hscaleoninstance-lrz68 created
create test_ops_cluster_etcd-hpbmma.yaml Success
`rm -rf test_ops_cluster_etcd-hpbmma.yaml`
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk`
NAME                                 NAMESPACE   TYPE                CLUSTER       COMPONENT   STATUS    PROGRESS   CREATED-TIME
etcd-hpbmma-hscaleoninstance-lrz68   ns-nluhk    HorizontalScaling   etcd-hpbmma   etcd        Running   -/-        Sep 01,2025 11:45 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk`
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (repeated ×7 while the instance rejoined)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk`
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 11:45 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:21 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-hscaleoninstance-lrz68 ns-nluhk HorizontalScaling etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:45 UTC+0800
check ops status done ops_status:etcd-hpbmma-hscaleoninstance-lrz68 ns-nluhk HorizontalScaling etcd-hpbmma etcd Succeed 1/1 Sep 01,2025 11:45 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-hscaleoninstance-lrz68 --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-hscaleoninstance-lrz68 patched
`kbcli cluster delete-ops --name etcd-hpbmma-hscaleoninstance-lrz68 --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-hscaleoninstance-lrz68 deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
cluster stop
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster stop etcd-hpbmma --auto-approve --force=true --namespace ns-nluhk `
OpsRequest etcd-hpbmma-stop-6ztpv created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-stop-6ztpv -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-stop-6ztpv ns-nluhk Stop etcd-hpbmma etcd Running 0/3 Sep 01,2025 11:46 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Stopped Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
check cluster status done cluster_status:Stopped
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-stop-6ztpv ns-nluhk Stop etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 11:46 UTC+0800
check ops status done ops_status:etcd-hpbmma-stop-6ztpv ns-nluhk Stop etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 11:46 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-stop-6ztpv --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-stop-6ztpv patched
`kbcli cluster delete-ops --name etcd-hpbmma-stop-6ztpv --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-stop-6ztpv deleted
cluster start
check cluster status before ops
check cluster status done cluster_status:Stopped
`kbcli cluster start etcd-hpbmma --force=true --namespace ns-nluhk `
OpsRequest etcd-hpbmma-start-4m6hn created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-start-4m6hn -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-start-4m6hn ns-nluhk Start etcd-hpbmma etcd Running 0/3 Sep 01,2025 11:46 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (×13)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:46 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:47 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:51 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-start-4m6hn ns-nluhk Start etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 11:46 UTC+0800
check ops status done ops_status:etcd-hpbmma-start-4m6hn ns-nluhk Start etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 11:46 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-start-4m6hn --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-start-4m6hn patched
`kbcli cluster delete-ops --name etcd-hpbmma-start-4m6hn --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-start-4m6hn deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
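(Editorial note: kbcli cluster stop/start submit OpsRequests just like the hand-built HorizontalScaling manifests above. A minimal sketch of what the Stop manifest presumably looks like, with the field set inferred from the scaling examples rather than taken from kbcli output:)
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: etcd-hpbmma-stop-
  namespace: ns-nluhk
spec:
  type: Stop
  clusterName: etcd-hpbmma
  force: true
The Start manifest would be identical except for type: Start.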
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover podfailure
check cluster status before cluster-failover-podfailure
check cluster status done cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-etcd-hpbmma" not found
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: test-chaos-mesh-podfailure-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  action: pod-failure
  duration: 2m
`kubectl apply -f test-chaos-mesh-podfailure-etcd-hpbmma.yaml`
podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-etcd-hpbmma created
apply test-chaos-mesh-podfailure-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-podfailure-etcd-hpbmma.yaml`
podfailure chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Abnormal Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:46 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:47 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:51 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-etcd-hpbmma" force deleted
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-1
failover podfailure Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover drainnode
check cluster status before cluster-failover-drainnode
check cluster status done cluster_status:Running
check node drain
check node drain success
kubectl get pod etcd-hpbmma-etcd-1 --namespace ns-nluhk -o jsonpath='{.spec.nodeName}'
get node name:aks-cicdamdpool-25950949-vmss000004 success
check if multiple pods are on the same node
kubectl get pod etcd-hpbmma-etcd-0 --namespace ns-nluhk -o jsonpath='{.spec.nodeName}'
get node name:aks-cicdamdpool-25950949-vmss000007 success
kubectl get pod etcd-hpbmma-etcd-2 --namespace ns-nluhk -o jsonpath='{.spec.nodeName}'
get node name:aks-cicdamdpool-25950949-vmss000007 success
kubectl drain aks-cicdamdpool-25950949-vmss000004 --delete-emptydir-data --ignore-daemonsets --force --grace-period 0 --timeout 60s
node/aks-cicdamdpool-25950949-vmss000004 cordoned
Warning: ignoring DaemonSet-managed Pods: chaos-mesh/chaos-daemon-5b98q, kb-ddjsn/kb-addon-apecloud-otel-collector-kgsss, kube-system/azure-cns-886zw, kube-system/azure-ip-masq-agent-bx52t, kube-system/cloud-node-manager-h647p, kube-system/csi-azuredisk-node-46jsf, kube-system/csi-azurefile-node-vq6sg, kube-system/kube-proxy-5qbx8
evicting pod kb-ddjsn/keda-operator-68b7db459f-nj299
evicting pod chaos-mesh/chaos-dns-server-cd7c67cc5-vh6k6
evicting pod chaos-mesh/chaos-controller-manager-5f8b869c9c-79h4t
evicting pod ns-ysucn/qdrant-jnynhk-qdrant-1
evicting pod kb-ddjsn/keda-operator-metrics-apiserver-57c48fcb88-fwx6m
evicting pod chaos-mesh/chaos-dashboard-7b866cc765-r9chk
evicting pod kb-ddjsn/kubeblocks-98d6b6779-g86fk
evicting pod kb-ddjsn/kubeblocks-dataprotection-65fc58f88c-7n65p
evicting pod kb-ddjsn/kbcli-test-minio-85784f94b-7z8kr
evicting pod kb-ddjsn/kb-addon-prometheus-server-0
evicting pod chaos-mesh/chaos-controller-manager-5f8b869c9c-76n8g
evicting pod kb-ddjsn/kb-addon-grafana-94cc97fcf-zj45v
evicting pod kube-system/coredns-6f776c8fb5-v9fx5
evicting pod kb-ddjsn/keda-admission-webhooks-57ffdc99c4-t2fjh
evicting pod kube-system/konnectivity-agent-55bb5559cb-d22tm
evicting pod ns-hswmj/tdsql-ktnkps-zookeeper-0
evicting pod ns-aozme/pulsar-tggykc-zookeeper-0
evicting pod ns-nluhk/etcd-hpbmma-etcd-1
evicting pod ns-aozme/pulsar-tggykc-pulsar-proxy-0
evicting pod ns-oxzrz/greptime-jzuuke-datanode-1
evicting pod ns-bkwga/obce-darrri-ob-bundle-0
evicting pod ns-oxzrz/greptime-jzuuke-etcd-1
evicting pod ns-hswmj/tdsql-ktnkps-scheduler-0
evicting pod ns-oxzrz/greptime-jzuuke-frontend-1
evicting pod ns-rqwpo/nebula-vrphvy-storaged-0
evicting pod ns-rqwpo/nebula-vrphvy-storaged-1
evicting pod ns-aozme/pulsar-tggykc-bookies-0
evicting pod ns-aozme/pulsar-tggykc-pulsar-broker-1
evicting pod ns-rqwpo/nebula-vrphvy-metad-0
pod/chaos-controller-manager-5f8b869c9c-79h4t evicted
pod/pulsar-tggykc-zookeeper-0 evicted
pod/tdsql-ktnkps-zookeeper-0 evicted
pod/greptime-jzuuke-etcd-1 evicted
pod/kb-addon-prometheus-server-0 evicted
pod/etcd-hpbmma-etcd-1 evicted
pod/tdsql-ktnkps-scheduler-0 evicted
pod/kbcli-test-minio-85784f94b-7z8kr evicted
pod/konnectivity-agent-55bb5559cb-d22tm evicted
pod/greptime-jzuuke-frontend-1 evicted
pod/chaos-controller-manager-5f8b869c9c-76n8g evicted
pod/kb-addon-grafana-94cc97fcf-zj45v evicted
pod/nebula-vrphvy-storaged-0 evicted
I0901 11:55:16.743009 9747 request.go:697] Waited for 1.000166839s due to client-side throttling, not priority and fairness, request: POST:https://still-monkey-k8s-5lhknbaw.hcp.eastus.azmk8s.io:443/api/v1/namespaces/ns-rqwpo/pods/nebula-vrphvy-storaged-1/eviction
pod/coredns-6f776c8fb5-v9fx5 evicted
pod/nebula-vrphvy-storaged-1 evicted
pod/keda-admission-webhooks-57ffdc99c4-t2fjh evicted
pod/pulsar-tggykc-pulsar-broker-1 evicted
pod/nebula-vrphvy-metad-0 evicted
pod/pulsar-tggykc-bookies-0 evicted
pod/keda-operator-metrics-apiserver-57c48fcb88-fwx6m evicted
pod/greptime-jzuuke-datanode-1 evicted
pod/qdrant-jnynhk-qdrant-1 evicted
pod/pulsar-tggykc-pulsar-proxy-0 evicted
pod/chaos-dns-server-cd7c67cc5-vh6k6 evicted
pod/chaos-dashboard-7b866cc765-r9chk evicted
pod/kubeblocks-98d6b6779-g86fk evicted
pod/keda-operator-68b7db459f-nj299 evicted
pod/obce-darrri-ob-bundle-0 evicted
pod/kubeblocks-dataprotection-65fc58f88c-7n65p evicted
node/aks-cicdamdpool-25950949-vmss000004 drained
kubectl uncordon aks-cicdamdpool-25950949-vmss000004
node/aks-cicdamdpool-25950949-vmss000004 uncordoned
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (×25)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "etcd-hpbmma-etcd-1" force deleted cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:46 UTC+0800 etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:58 UTC+0800 etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1 No resources found in ns-nluhk namespace. check cluster connect `echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash` check cluster connect done check failover pod name failover pod name:etcd-hpbmma-etcd-2 failover drainnode Success No resources found in ns-nluhk namespace. check db_client batch data count `echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash ` check db_client batch data Success No resources found in ns-nluhk namespace. check readonly db_client batch data count `echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash ` check readonly db_client batch data Success test failover dnsrandom check cluster status before cluster-failover-dnsrandom check cluster status done cluster_status:Running check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnsrandom-etcd-hpbmma --namespace ns-nluhk ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-etcd-hpbmma" not found
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnsrandom-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-2
  mode: all
  action: random
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnsrandom-etcd-hpbmma.yaml`
dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-etcd-hpbmma created
apply test-chaos-mesh-dnsrandom-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-dnsrandom-etcd-hpbmma.yaml`
dnsrandom chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:46 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:58 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:51 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-etcd-hpbmma" force deleted
dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-etcd-hpbmma patched
check failover pod name
failover pod name:etcd-hpbmma-etcd-2
failover dnsrandom Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
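(Editorial note: the DNSChaos action random makes lookups inside the targeted pod resolve to random IPs, so a hand check during the two-minute window is in-pod resolution of the client service. A sketch, assuming the etcd image ships getent or a similar resolver tool:)
`echo 'getent hosts etcd-hpbmma-client.ns-nluhk.svc.cluster.local' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
Repeated runs during the chaos window would be expected to return differing, bogus addresses.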
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart etcd-hpbmma --auto-approve --force=true --namespace ns-nluhk `
OpsRequest etcd-hpbmma-restart-dqlj2 created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-restart-dqlj2 -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-restart-dqlj2 ns-nluhk Restart etcd-hpbmma etcd Running 0/3 Sep 01,2025 12:02 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (×17)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
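(Editorial note: that all three pods were really recreated by the Restart ops can be read off their creationTimestamps, which should all postdate the 12:02 OpsRequest, matching the CREATED-TIME column above:)
`kubectl get pods -l app.kubernetes.io/instance=etcd-hpbmma --namespace ns-nluhk -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.creationTimestamp}{"\n"}{end}'`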
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-restart-dqlj2 ns-nluhk Restart etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:02 UTC+0800
check ops status done ops_status:etcd-hpbmma-restart-dqlj2 ns-nluhk Restart etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:02 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-restart-dqlj2 --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-restart-dqlj2 patched
`kbcli cluster delete-ops --name etcd-hpbmma-restart-dqlj2 --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-restart-dqlj2 deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover networkdelayover
check cluster status before cluster-failover-networkdelayover
check cluster status done cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelayover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelayover-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelayover-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkdelayover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-1
  mode: all
  action: delay
  delay:
    latency: 2000ms
    correlation: '100'
    jitter: 0ms
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkdelayover-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelayover-etcd-hpbmma created
apply test-chaos-mesh-networkdelayover-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networkdelayover-etcd-hpbmma.yaml`
networkdelayover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelayover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelayover-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelayover-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover networkdelayover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
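(Editorial note: while the 2000ms delay is active, leadership can also be watched from etcd's side; endpoint status reports the current leader and raft term, and a rising term would indicate re-elections triggered by the delayed traffic. A sketch using the same exec pattern as the harness:)
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint status -w table' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`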
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover fullcpuover
check cluster status before cluster-failover-fullcpuover
check cluster status done cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpuover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpuover-etcd-hpbmma" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpuover-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-fullcpuover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  stressors:
    cpu:
      workers: 100
      load: 100
  duration: 2m
`kubectl apply -f test-chaos-mesh-fullcpuover-etcd-hpbmma.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpuover-etcd-hpbmma created
apply test-chaos-mesh-fullcpuover-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-fullcpuover-etcd-hpbmma.yaml`
fullcpuover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpuover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpuover-etcd-hpbmma" force deleted
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpuover-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-1
failover fullcpuover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover networklossover
check cluster status before cluster-failover-networklossover
check cluster status done cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networklossover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-1
  mode: all
  action: loss
  loss:
    loss: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networklossover-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-etcd-hpbmma created
apply test-chaos-mesh-networklossover-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networklossover-etcd-hpbmma.yaml`
networklossover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover networklossover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover kill1
check cluster status before cluster-failover-kill1
check cluster status done cluster_status:Running
check node drain
check node drain success
`kill 1`
Defaulted container "etcd" out of: etcd, lorry, inject-bash (init), init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
exec return message:
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
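(Editorial note: the bare `kill 1` above is presumably executed inside the etcd container of the then-leader via kubectl exec, which is what produces the "Defaulted container" and TTY messages. A sketch of the likely invocation; the target pod is an assumption:)
`kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- kill 1`
Killing PID 1 in the etcd container forces a container restart and, when the leader is hit, a re-election.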
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
check failover pod name
failover pod name:etcd-hpbmma-etcd-2
failover kill1 Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover timeoffset
check cluster status before cluster-failover-timeoffset
check cluster status done cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-etcd-hpbmma" not found
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: TimeChaos
metadata:
  name: test-chaos-mesh-timeoffset-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
    - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-2
  mode: all
  timeOffset: '-10m'
  clockIds:
  - CLOCK_REALTIME
  duration: 2m
`kubectl apply -f test-chaos-mesh-timeoffset-etcd-hpbmma.yaml`
timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-etcd-hpbmma created
apply test-chaos-mesh-timeoffset-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-timeoffset-etcd-hpbmma.yaml`
timeoffset chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:03 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:02 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running leader 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:04 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
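(Editorial note: TimeChaos only fakes CLOCK_REALTIME for the targeted pod, so a simple probe during the window is to read the in-pod wall clock and compare it with the host's; the -10m offset should be visible. Assumes date is available in the etcd image:)
`kubectl exec etcd-hpbmma-etcd-2 --namespace ns-nluhk -- date -u`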
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-etcd-hpbmma" force deleted
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-2
failover timeoffset Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
cluster vscale
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster vscale etcd-hpbmma --auto-approve --force=true --components etcd --cpu 200m --memory 0.6Gi --namespace ns-nluhk `
OpsRequest etcd-hpbmma-verticalscaling-ngd6d created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-verticalscaling-ngd6d -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-verticalscaling-ngd6d ns-nluhk VerticalScaling etcd-hpbmma etcd Running 0/3 Sep 01,2025 12:14 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (×12)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:15 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running leader 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:14 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:15 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-verticalscaling-ngd6d ns-nluhk VerticalScaling etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:14 UTC+0800
check ops status done ops_status:etcd-hpbmma-verticalscaling-ngd6d ns-nluhk VerticalScaling etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:14 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-verticalscaling-ngd6d --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-verticalscaling-ngd6d patched
`kbcli cluster delete-ops --name etcd-hpbmma-verticalscaling-ngd6d --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-verticalscaling-ngd6d deleted
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
`kubectl get pvc -l app.kubernetes.io/instance=etcd-hpbmma,apps.kubeblocks.io/component-name=etcd,apps.kubeblocks.io/vct-name=data --namespace ns-nluhk `
cluster volume-expand
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in etcd-hpbmma namespace.
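(Editorial note on the odd-looking memory figure in the tables above: 644245094400m is simply 0.6Gi rendered in Kubernetes milli-units. 0.6 x 2^30 bytes = 644,245,094.4 bytes, and since the API server cannot store a fractional byte it keeps the quantity as 644245094400m, i.e. 644,245,094,400/1000 bytes.)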
`kbcli cluster volume-expand etcd-hpbmma --auto-approve --force=true --components etcd --volume-claim-templates data --storage 2Gi --namespace ns-nluhk `
OpsRequest etcd-hpbmma-volumeexpansion-nt994 created successfully, you can view the progress:
	kbcli cluster describe-ops etcd-hpbmma-volumeexpansion-nt994 -n ns-nluhk
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-volumeexpansion-nt994 ns-nluhk VolumeExpansion etcd-hpbmma etcd Running 0/3 Sep 01,2025 12:16 UTC+0800
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
etcd-hpbmma ns-nluhk Halt Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating (×14)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running leader 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:15 UTC+0800
check pod status done
check cluster role
check cluster role done leader: etcd-hpbmma-etcd-2;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-1
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash`
check cluster connect done
No resources found in etcd-hpbmma namespace.
check ops status
`kbcli cluster list-ops etcd-hpbmma --status all --namespace ns-nluhk `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
etcd-hpbmma-volumeexpansion-nt994 ns-nluhk VolumeExpansion etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:16 UTC+0800
check ops status done ops_status:etcd-hpbmma-volumeexpansion-nt994 ns-nluhk VolumeExpansion etcd-hpbmma etcd Succeed 3/3 Sep 01,2025 12:16 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests etcd-hpbmma-volumeexpansion-nt994 --namespace ns-nluhk `
opsrequest.apps.kubeblocks.io/etcd-hpbmma-volumeexpansion-nt994 patched
`kbcli cluster delete-ops --name etcd-hpbmma-volumeexpansion-nt994 --force --auto-approve --namespace ns-nluhk `
OpsRequest etcd-hpbmma-volumeexpansion-nt994 deleted
No resources found in ns-nluhk namespace.
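(Editorial note: volume expansion lands on the PVCs first, so the new size can be read straight off their status capacity, using the same vct-name label the harness queries above:)
`kubectl get pvc -l app.kubernetes.io/instance=etcd-hpbmma,apps.kubeblocks.io/vct-name=data --namespace ns-nluhk -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.storage}{"\n"}{end}'`
Each of the three data PVCs should report 2Gi once the CSI resize completes.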
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-2 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover connectionstressover
check cluster status before cluster-failover-connectionstressover
check cluster status done cluster_status:Running
check node drain
check node drain success
Error from server (NotFound): pods "test-db-client-connectionstressover-etcd-hpbmma" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstressover-etcd-hpbmma --namespace ns-nluhk `
Error from server (NotFound): pods "test-db-client-connectionstressover-etcd-hpbmma" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-connectionstressover-etcd-hpbmma" not found
`kubectl get secrets -l app.kubernetes.io/instance=etcd-hpbmma`
No resources found in ns-nluhk namespace.
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:2379;DB_DATABASE:
No resources found in ns-nluhk namespace.
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstressover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "etcd-hpbmma-client.ns-nluhk.svc.cluster.local"
    - "--user"
    - ""
    - "--password"
    - ""
    - "--port"
    - "2379"
    - "--database"
    - ""
    - "--dbtype"
    - "etcd"
    - "--test"
    - "connectionstress"
    - "--connections"
    - "5000"
    - "--duration"
    - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstressover-etcd-hpbmma.yaml`
pod/test-db-client-connectionstressover-etcd-hpbmma created
apply test-db-client-connectionstressover-etcd-hpbmma.yaml Success
`rm -rf test-db-client-connectionstressover-etcd-hpbmma.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 6s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 10s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 16s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 21s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 27s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 32s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 38s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 43s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 48s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 54s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 59s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 65s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 70s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 76s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 81s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 1/1 Running 0 87s check pod test-db-client-connectionstressover-etcd-hpbmma status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstressover-etcd-hpbmma 0/1 Completed 0 92s check cluster status `kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS etcd-hpbmma ns-nluhk Halt Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=etcd-hpbmma check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME etcd-hpbmma-etcd-0 ns-nluhk etcd-hpbmma etcd Running leader 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:17 UTC+0800 etcd-hpbmma-etcd-1 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 etcd-hpbmma-etcd-2 ns-nluhk etcd-hpbmma etcd Running follower 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:15 UTC+0800 check pod status done check cluster role check cluster role done leader: etcd-hpbmma-etcd-0;follower: etcd-hpbmma-etcd-1 etcd-hpbmma-etcd-2 No resources found in ns-nluhk namespace. 
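Note: the pod-status check above is a fixed-interval polling loop. For reference, a minimal sketch (not part of the recorded run) of the same wait using kubectl's built-in waiter, assuming kubectl >= 1.23 (which supports jsonpath conditions):

  # block until the stress-test client pod reaches phase Succeeded
  # (kubectl get displays that phase as "Completed")
  kubectl wait --for=jsonpath='{.status.phase}'=Succeeded \
    pod/test-db-client-connectionstressover-etcd-hpbmma \
    --namespace ns-nluhk --timeout=300s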
Failed to establish connection 4540: Failed to execute command:java.util.concurrent.ExecutionException: shade.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
[... the same "Failed to establish connection" error repeated for connections 4541 through 4623 ...]
Connection stress test completed:
Successful connections: 5000
Duration: 29 seconds
Connection Information:
Database Type: etcd
Host: etcd-hpbmma-client.ns-nluhk.svc.cluster.local
Port: 2379
Database:
Table:
User:
Org:
Access Mode: mysql
Test Type: connectionstress
Connection Count: 5000
Duration: 60 seconds
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstressover-etcd-hpbmma --namespace ns-nluhk `
pod/test-db-client-connectionstressover-etcd-hpbmma patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-connectionstressover-etcd-hpbmma" force deleted
check failover pod name
failover pod name:etcd-hpbmma-etcd-0
failover connectionstressover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
test failover networkcorruptover
check cluster status before cluster-failover-networkcorruptover
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-etcd-hpbmma" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkcorruptover-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-0
  mode: all
  action: corrupt
  corrupt:
    corrupt: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkcorruptover-etcd-hpbmma.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-etcd-hpbmma created
apply test-chaos-mesh-networkcorruptover-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-networkcorruptover-etcd-hpbmma.yaml`
networkcorruptover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:15 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-etcd-hpbmma" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-1
failover networkcorruptover Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
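Note: while a chaos experiment like the NetworkChaos above is active, it can be inspected directly through its CRD. A minimal sketch (not part of the recorded run), assuming the Chaos Mesh installation shown earlier:

  # list active chaos experiments in the test namespace
  kubectl get networkchaos --namespace ns-nluhk
  # show selector, injected action, and injection events for this experiment
  kubectl describe networkchaos test-chaos-mesh-networkcorruptover-etcd-hpbmma --namespace ns-nluhk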
test failover dnserror
check cluster status before cluster-failover-dnserror
check cluster status done
cluster_status:Running
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-etcd-hpbmma" not found
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-etcd-hpbmma" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnserror-etcd-hpbmma
  namespace: ns-nluhk
spec:
  selector:
    namespaces:
      - ns-nluhk
    labelSelectors:
      apps.kubeblocks.io/pod-name: etcd-hpbmma-etcd-1
  mode: all
  action: error
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnserror-etcd-hpbmma.yaml`
dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-etcd-hpbmma created
apply test-chaos-mesh-dnserror-etcd-hpbmma.yaml Success
`rm -rf test-chaos-mesh-dnserror-etcd-hpbmma.yaml`
dnserror chaos test waiting 120 seconds
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:15 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-etcd-hpbmma --namespace ns-nluhk `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-etcd-hpbmma" force deleted
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-etcd-hpbmma" not found
check failover pod name
failover pod name:etcd-hpbmma-etcd-1
failover dnserror Success
No resources found in ns-nluhk namespace.
check db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash `
check db_client batch data Success
No resources found in ns-nluhk namespace.
check readonly db_client batch data count
`echo "etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 get --prefix \"executions_loop_key\" --keys-only | (grep executions_loop_key || true) | wc -l " | kubectl exec -it etcd-hpbmma-etcd-0 --namespace ns-nluhk -- bash `
check readonly db_client batch data Success
cluster update terminationPolicy WipeOut
`kbcli cluster update etcd-hpbmma --termination-policy=WipeOut --namespace ns-nluhk `
cluster.apps.kubeblocks.io/etcd-hpbmma updated
check cluster status
`kbcli cluster list etcd-hpbmma --show-labels --namespace ns-nluhk `
NAME          NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
etcd-hpbmma   ns-nluhk                                   WipeOut              Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=etcd-hpbmma
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma --namespace ns-nluhk `
NAME                 NAMESPACE   CLUSTER       COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-etcd-0   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-1   ns-nluhk    etcd-hpbmma   etcd        Running   leader                  0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:17 UTC+0800
etcd-hpbmma-etcd-2   ns-nluhk    etcd-hpbmma   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000007/10.224.0.5    Sep 01,2025 12:15 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-etcd-1;follower: etcd-hpbmma-etcd-0 etcd-hpbmma-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-etcd-1 --namespace ns-nluhk -- bash`
check cluster connect done
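Note: the terminationPolicy change above goes through kbcli, but it maps onto a single field of the Cluster resource. A minimal sketch (not part of the recorded run) of the equivalent direct patch, assuming merge-patch semantics on the KubeBlocks Cluster CRD:

  # switch the cluster's termination policy without going through kbcli
  kubectl patch clusters.apps.kubeblocks.io etcd-hpbmma --namespace ns-nluhk \
    --type=merge -p '{"spec":{"terminationPolicy":"WipeOut"}}'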
cluster datafile backup
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`
`kubectl get secrets kb-backuprepo-k4qbz -n kb-ddjsn -o jsonpath="{.data.accessKeyId}"`
`kubectl get secrets kb-backuprepo-k4qbz -n kb-ddjsn -o jsonpath="{.data.secretAccessKey}"`
KUBEBLOCKS NAMESPACE:kb-ddjsn
get kubeblocks namespace done
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-ddjsn -o jsonpath="{.items[0].data.root-user}"`
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-ddjsn -o jsonpath="{.items[0].data.root-password}"`
minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-ddjsn.svc.cluster.local:9000
list minio bucket kbcli-test
`echo 'mc config host add minioserver http://kbcli-test-minio.kb-ddjsn.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-85784f94b-pjm6h --namespace kb-ddjsn -- bash`
Unable to use a TTY - input is not a terminal or the right kind of file
list minio bucket done
default backuprepo:backuprepo-kbcli-test exists
`kbcli cluster backup etcd-hpbmma --method datafile --namespace ns-nluhk `
Backup backup-ns-nluhk-etcd-hpbmma-20250901122950 created successfully, you can view the progress:
kbcli cluster list-backups --name=backup-ns-nluhk-etcd-hpbmma-20250901122950 -n ns-nluhk
check backup status
`kbcli cluster list-backups etcd-hpbmma --namespace ns-nluhk `
NAME                                         NAMESPACE   SOURCE-CLUSTER   METHOD     STATUS    TOTAL-SIZE   DURATION   CREATE-TIME                  COMPLETION-TIME   EXPIRATION
backup-ns-nluhk-etcd-hpbmma-20250901122950   ns-nluhk    etcd-hpbmma      datafile   Running                           Sep 01,2025 12:29 UTC+0800
backup_status:etcd-hpbmma-datafile-Running
backup_status:etcd-hpbmma-datafile-Running
backup_status:etcd-hpbmma-datafile-Running
check backup status done
backup_status:backup-ns-nluhk-etcd-hpbmma-20250901122950 ns-nluhk etcd-hpbmma datafile Completed 62567 11s Sep 01,2025 12:29 UTC+0800 Sep 01,2025 12:30 UTC+0800
cluster restore backup
Error from server (NotFound): opsrequests.apps.kubeblocks.io "etcd-hpbmma-backup" not found
`kbcli cluster describe-backup backup-ns-nluhk-etcd-hpbmma-20250901122950 --namespace ns-nluhk `
Name: backup-ns-nluhk-etcd-hpbmma-20250901122950
Cluster: etcd-hpbmma
Namespace: ns-nluhk
Spec:
  Method: datafile
  Policy Name: etcd-hpbmma-etcd-backup-policy
Status:
  Phase: Completed
  Total Size: 62567
  ActionSet Name: etcd-backup-actionset
  Repository: backuprepo-kbcli-test
  Duration: 11s
  Start Time: Sep 01,2025 12:29 UTC+0800
  Completion Time: Sep 01,2025 12:30 UTC+0800
  Path: /ns-nluhk/etcd-hpbmma-e609cd02-c965-4561-9e0c-84289be711ea/etcd/backup-ns-nluhk-etcd-hpbmma-20250901122950
Warning Events:
`kbcli cluster restore etcd-hpbmma-backup --backup backup-ns-nluhk-etcd-hpbmma-20250901122950 --namespace ns-nluhk `
Cluster etcd-hpbmma-backup created
check cluster status
`kbcli cluster list etcd-hpbmma-backup --show-labels --namespace ns-nluhk `
NAME                 NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
etcd-hpbmma-backup   ns-nluhk                                   WipeOut                       Sep 01,2025 12:30 UTC+0800
cluster_status:
[... cluster_status polled every few seconds: blank at first, then Creating, repeated until the restored cluster came up ...]
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcd-hpbmma-backup --namespace ns-nluhk `
NAME                        NAMESPACE   CLUSTER              COMPONENT   STATUS    ROLE       ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                              CREATED-TIME
etcd-hpbmma-backup-etcd-0   ns-nluhk    etcd-hpbmma-backup   etcd        Running   leader                  0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000008/10.224.0.7    Sep 01,2025 12:30 UTC+0800
etcd-hpbmma-backup-etcd-1   ns-nluhk    etcd-hpbmma-backup   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 12:31 UTC+0800
etcd-hpbmma-backup-etcd-2   ns-nluhk    etcd-hpbmma-backup   etcd        Running   follower                0    200m / 200m          644245094400m / 644245094400m   data:2Gi   aks-cicdamdpool-25950949-vmss000006/10.224.0.10   Sep 01,2025 12:31 UTC+0800
check pod status done
check cluster role
check cluster role done
leader: etcd-hpbmma-backup-etcd-0;follower: etcd-hpbmma-backup-etcd-1 etcd-hpbmma-backup-etcd-2
No resources found in ns-nluhk namespace.
check cluster connect
`echo 'etcdctl --endpoints=http://etcd-hpbmma-backup-client.ns-nluhk.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcd-hpbmma-backup-etcd-0 --namespace ns-nluhk -- bash`
check cluster connect done
`kbcli cluster describe-backup backup-ns-nluhk-etcd-hpbmma-20250901122950 --namespace ns-nluhk `
Name: backup-ns-nluhk-etcd-hpbmma-20250901122950
Cluster: etcd-hpbmma
Namespace: ns-nluhk
Spec:
  Method: datafile
  Policy Name: etcd-hpbmma-etcd-backup-policy
Status:
  Phase: Completed
  Total Size: 62567
  ActionSet Name: etcd-backup-actionset
  Repository: backuprepo-kbcli-test
  Duration: 11s
  Start Time: Sep 01,2025 12:29 UTC+0800
  Completion Time: Sep 01,2025 12:30 UTC+0800
  Path: /ns-nluhk/etcd-hpbmma-e609cd02-c965-4561-9e0c-84289be711ea/etcd/backup-ns-nluhk-etcd-hpbmma-20250901122950
Warning Events:
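Note: the same backup can also be read straight from the Backup custom resource behind the kbcli output. A minimal sketch (not part of the recorded run), assuming the dataprotection API group shown when the backup is later patched:

  # list backup objects and read this backup's phase directly
  kubectl get backups.dataprotection.kubeblocks.io --namespace ns-nluhk
  kubectl get backups.dataprotection.kubeblocks.io backup-ns-nluhk-etcd-hpbmma-20250901122950 \
    --namespace ns-nluhk -o jsonpath='{.status.phase}'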
cluster connect
No resources found in ns-nluhk namespace.
`echo "etcdctl --endpoints=http://etcd-hpbmma-backup-client.ns-nluhk.svc.cluster.local:2379 member list" | kubectl exec -it etcd-hpbmma-backup-etcd-0 --namespace ns-nluhk -- bash `
Defaulted container "etcd" out of: etcd, lorry, inject-bash (init), init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
a5fa426c4594ea16, started, etcd-hpbmma-backup-etcd-2, http://etcd-hpbmma-backup-etcd-2.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-backup-etcd-2.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
e80ebe7105227efd, started, etcd-hpbmma-backup-etcd-1, http://etcd-hpbmma-backup-etcd-1.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-backup-etcd-1.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
fe6db27857dccb22, started, etcd-hpbmma-backup-etcd-0, http://etcd-hpbmma-backup-etcd-0.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2380, http://etcd-hpbmma-backup-etcd-0.etcd-hpbmma-backup-etcd-headless.ns-nluhk.svc.cluster.local:2379, false
connect cluster Success
delete cluster etcd-hpbmma-backup
`kbcli cluster delete etcd-hpbmma-backup --auto-approve --namespace ns-nluhk `
Cluster etcd-hpbmma-backup deleted
pod_info:etcd-hpbmma-backup-etcd-0 2/2 Running 0 89s etcd-hpbmma-backup-etcd-1 2/2 Running 0 52s etcd-hpbmma-backup-etcd-2 2/2 Running 0 32s
No resources found in ns-nluhk namespace.
delete cluster pod done
No resources found in ns-nluhk namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-nluhk namespace.
delete cluster done
No resources found in ns-nluhk namespace.
No resources found in ns-nluhk namespace.
No resources found in ns-nluhk namespace.
cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-nluhk-etcd-hpbmma-20250901122950 --namespace ns-nluhk `
backup.dataprotection.kubeblocks.io/backup-ns-nluhk-etcd-hpbmma-20250901122950 patched
`kbcli cluster delete-backup etcd-hpbmma --name backup-ns-nluhk-etcd-hpbmma-20250901122950 --force --auto-approve --namespace ns-nluhk `
Backup backup-ns-nluhk-etcd-hpbmma-20250901122950 deleted
No opsrequests found in ns-nluhk namespace.
cluster list-logs
`kbcli cluster list-logs etcd-hpbmma --namespace ns-nluhk `
No log files found. You can enable the log feature with the kbcli command below.
kbcli cluster update etcd-hpbmma --enable-all-logs=true --namespace ns-nluhk
Error from server (NotFound): pods "etcd-hpbmma-etcd-1" not found
cluster logs
`kbcli cluster logs etcd-hpbmma --tail 30 --namespace ns-nluhk `
Defaulted container "etcd" out of: etcd, lorry, inject-bash (init), init-lorry (init)
{"level":"warn","ts":"2025-09-01T04:28:46.964Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:48.266Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:48.554Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:52.964Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:52.964Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:53.267Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:53.555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:54.965Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:54.965Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:28:58.268Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: lookup etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local on 10.0.31.96:53: server misbehaving"}
{"level":"warn","ts":"2025-09-01T04:28:58.555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:00.966Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:00.966Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:02.966Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:02.966Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:03.268Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:03.556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:08.269Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:08.556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:08.968Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:08.968Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"7bd45124eba3d1a","error":"Get \"http://etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: i/o timeout"}
{"level":"warn","ts":"2025-09-01T04:29:10.661Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: lookup etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local on 10.0.31.96:53: server misbehaving"}
{"level":"warn","ts":"2025-09-01T04:29:10.661Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"aa0af6056900ee9c","error":"Get \"http://etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local:2380/version\": dial tcp: lookup etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local on 10.0.31.96:53: server misbehaving"}
{"level":"info","ts":"2025-09-01T04:29:12.366Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"7bd45124eba3d1a"}
{"level":"info","ts":"2025-09-01T04:29:12.366Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"633a9a7a806106b4","remote-peer-id":"7bd45124eba3d1a"}
{"level":"warn","ts":"2025-09-01T04:29:13.269Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"aa0af6056900ee9c","rtt":"3.361948ms","error":"dial tcp: lookup etcd-hpbmma-etcd-2.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local on 10.0.31.96:53: server misbehaving"}
{"level":"warn","ts":"2025-09-01T04:29:13.556Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7bd45124eba3d1a","rtt":"759.815286ms","error":"dial tcp: lookup etcd-hpbmma-etcd-0.etcd-hpbmma-etcd-headless.ns-nluhk.svc.cluster.local on 10.0.31.96:53: server misbehaving"}
{"level":"info","ts":"2025-09-01T04:29:53.347Z","caller":"v3rpc/maintenance.go:126","msg":"sending database snapshot to client","total-bytes":614400,"size":"614 kB"}
{"level":"info","ts":"2025-09-01T04:29:53.353Z","caller":"v3rpc/maintenance.go:166","msg":"sending database sha256 checksum to client","total-bytes":614400,"checksum-size":32}
{"level":"info","ts":"2025-09-01T04:29:53.353Z","caller":"v3rpc/maintenance.go:175","msg":"successfully sent database snapshot to client","total-bytes":614400,"size":"614 kB","took":"now"}
delete cluster etcd-hpbmma
`kbcli cluster delete etcd-hpbmma --auto-approve --namespace ns-nluhk `
Cluster etcd-hpbmma deleted
pod_info:etcd-hpbmma-etcd-0 2/2 Running 0 15m etcd-hpbmma-etcd-1 2/2 Running 0 15m etcd-hpbmma-etcd-2 2/2 Running 1 (9m13s ago) 16m
No resources found in ns-nluhk namespace.
delete cluster pod done
No resources found in ns-nluhk namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-nluhk namespace.
delete cluster done
No resources found in ns-nluhk namespace.
No resources found in ns-nluhk namespace.
No resources found in ns-nluhk namespace.
Etcd Test Suite All Done!
--------------------------------------Etcd (Topology = Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[ClusterDefinition=etcd;]|[Description=Create a cluster with the specified cluster definition etcd]
[PASSED]|[Connect]|[ComponentName=etcd]|[Description=Connect to the cluster]
[PASSED]|[No-Failover]|[HA=Network Partition;Durations=2m;ComponentName=etcd]|[Description=Simulates network partition fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to partition network.]
[PASSED]|[Failover]|[HA=OOM;Durations=2m;ComponentName=etcd]|[Description=Simulates conditions where pods experience OOM either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Memory load.]
[PASSED]|[HorizontalScaling Out]|[ComponentName=etcd]|[Description=HorizontalScaling Out the cluster specify component etcd]
[PASSED]|[HorizontalScaling In]|[ComponentName=etcd]|[Description=HorizontalScaling In the cluster specify component etcd]
[PASSED]|[SwitchOver]|[ComponentName=etcd]|[Description=SwitchOver the cluster specify component etcd]
[PASSED]|[Failover]|[HA=Network Bandwidth;Durations=2m;ComponentName=etcd]|[Description=Simulates network bandwidth fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to bandwidth network.]
[PASSED]|[No-Failover]|[HA=Network Duplicate;Durations=2m;ComponentName=etcd]|[Description=Simulates network duplicate fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to duplicate network.]
[PASSED]|[Failover]|[HA=Pod Kill;ComponentName=etcd]|[Description=Simulates conditions where pods experience kill for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to kill.]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=etcd]|[Description=Hscale the cluster instances offline specify component etcd]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=etcd]|[Description=Hscale the cluster instances online specify component etcd]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Failover]|[HA=Pod Failure;Durations=2m;ComponentName=etcd]|[Description=Simulates conditions where pods experience failure for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to failure.]
[PASSED]|[Failover]|[HA=Evicting Pod;ComponentName=etcd]|[Description=Simulates conditions where pods evicting either due to node drained thereby testing the application's resilience to unavailability of some replicas due to evicting.]
[PASSED]|[No-Failover]|[HA=DNS Random;Durations=2m;ComponentName=etcd]|[Description=Simulates conditions where pods experience random IP addresses being returned by the DNS service for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to the DNS service returning random IP addresses.]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Failover]|[HA=Network Delay;Durations=2m;ComponentName=etcd]|[Description=Simulates network delay fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to delay network.]
[PASSED]|[Failover]|[HA=Full CPU;Durations=2m;ComponentName=etcd]|[Description=Simulates conditions where pods experience CPU full either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.]
[PASSED]|[Failover]|[HA=Network Loss;Durations=2m;ComponentName=etcd]|[Description=Simulates network loss fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to loss network.]
[PASSED]|[Failover]|[HA=Kill 1;ComponentName=etcd]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.]
[PASSED]|[No-Failover]|[HA=Time Offset;Durations=2m;ComponentName=etcd]|[Description=Simulates a time offset scenario thereby testing the application's resilience to potential slowness/unavailability of some replicas due to time offset.]
[PASSED]|[VerticalScaling]|[ComponentName=etcd]|[Description=VerticalScaling the cluster specify component etcd]
[PASSED]|[VolumeExpansion]|[ComponentName=etcd]|[Description=VolumeExpansion the cluster specify component etcd]
[PASSED]|[Failover]|[HA=Connection Stress;ComponentName=etcd]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[Failover]|[HA=Network Corrupt;Durations=2m;ComponentName=etcd]|[Description=Simulates network corrupt fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to corrupt network.]
[PASSED]|[No-Failover]|[HA=DNS Error;Durations=2m;ComponentName=etcd]|[Description=Simulates conditions where pods experience DNS service errors for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to DNS service errors.]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Backup]|[BackupMethod=datafile]|[Description=The cluster datafile Backup]
[PASSED]|[Restore]|[BackupMethod=datafile]|[Description=The cluster datafile Restore]
[PASSED]|[Connect]|[ComponentName=etcd]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=datafile]|[Description=Delete the datafile restore cluster]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]