source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-nzwma`
`kubectl create namespace ns-nzwma`
namespace/ns-nzwma created
create namespace ns-nzwma done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.1`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 1.0.1
kbcli: 1.0.1
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
	https://kubeblocks.io

download kbcli v1.0.1 done
Kubernetes: v1.32.6
KubeBlocks: 1.0.1
kbcli: 1.0.1
Kubernetes Env: v1.32.6
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is: 1.0.1
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.1
Error: no repositories to show
`helm repo add chaos-mesh https://charts.chaos-mesh.org`
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name: mdit
set component version
set component version: elasticsearch
set service versions: 8.15.5,8.8.2,8.1.3,7.10.1,7.8.1,7.7.1,6.8.23
set service versions sorted: 6.8.23,7.7.1,7.8.1,7.10.1,8.1.3,8.8.2,8.15.5
set elasticsearch component definition
set elasticsearch component definition elasticsearch-data-8-1.0.1
REPORT_COUNT 0:0
set replicas first: 3,6.8.23|3,7.7.1|3,7.8.1|3,7.10.1|3,8.1.3|3,8.8.2|3,8.15.5
set replicas third: 3,8.8.2
set replicas fourth: 3,8.1.3
set minimum cmpv service version
set minimum cmpv service version replicas: 3,8.1.3
REPORT_COUNT: 1
CLUSTER_TOPOLOGY: multi-node
topology multi-node found in cluster definition elasticsearch
set elasticsearch component definition
set elasticsearch component definition elasticsearch-master-7-1.0.1
LIMIT_CPU: 0.5
LIMIT_MEMORY: 2
storage size: 20
CLUSTER_NAME: elastics-tcnsxn
No resources found in ns-nzwma namespace.
pod_info:
termination_policy: WipeOut
create 3 replica WipeOut elasticsearch cluster
check component definition
set component definition by component version
check cmpd by labels
set component definition1: elasticsearch-8-1.0.1 by component version: elasticsearch

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: elastics-tcnsxn
  namespace: ns-nzwma
spec:
  terminationPolicy: WipeOut
  componentSpecs:
    - name: master
      componentDef: elasticsearch-master-8-1.0.1
      serviceVersion: 8.1.3
      configs:
        - name: es-cm
          variables:
            version: 8.1.3
      schedulingPolicy:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: elastics-tcnsxn
                      apps.kubeblocks.io/component-name: master
                  topologyKey: kubernetes.io/hostname
                weight: 100
      disableExporter: false
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
      tls: false
    - name: data
      componentDef: elasticsearch-data-8-1.0.1
      serviceVersion: 8.1.3
      configs:
        - name: es-cm
          variables:
            version: 8.1.3
      schedulingPolicy:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app.kubernetes.io/instance: elastics-tcnsxn
                      apps.kubeblocks.io/component-name: data
                  topologyKey: kubernetes.io/hostname
                weight: 100
      disableExporter: false
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
      tls: false
    - name: kibana
      componentDef: kibana-8-1.0.1
      serviceVersion: 8.1.3
      replicas: 1
      podUpdatePolicy: PreferInPlace
      disableExporter: false
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      tls: false

`kubectl apply -f test_create_elastics-tcnsxn.yaml`
cluster.apps.kubeblocks.io/elastics-tcnsxn created
apply test_create_elastics-tcnsxn.yaml Success
`rm -rf test_create_elastics-tcnsxn.yaml`
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-tcnsxn   ns-nzwma                         WipeOut              Creating   Sep 11,2025 17:21 UTC+0800
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
elastics-tcnsxn-data-0     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-1     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-2     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-kibana-0   ns-nzwma    elastics-tcnsxn   kibana      Running   0   500m / 500m   2Gi / 2Gi               aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-1   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-2   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=elastics-tcnsxn`
set secret: elastics-tcnsxn-master-account-elastic
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic; DB_PASSWORD:HQoM46m657; DB_PORT:9200; DB_DATABASE:elastic
check pod elastics-tcnsxn-master-0 container_name elasticsearch exist password HQoM46m657
check pod elastics-tcnsxn-master-0 container_name exporter exist password HQoM46m657
check pod elastics-tcnsxn-master-0 container_name es-agent exist password HQoM46m657
check pod elastics-tcnsxn-master-0 container_name kbagent exist password HQoM46m657
No container logs contain secret password.
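The `jsonpath` lookups above return the secret fields base64-encoded, so the test has to decode them before use. A minimal sketch of that decode step; the literal value here is the password already shown in this log, encoded locally so the example runs without a cluster:

```shell
# kubectl prints Secret data base64-encoded; decode before use.
# The encoded string is produced locally from the password already shown
# in this log (HQoM46m657), standing in for real kubectl output.
encoded=$(printf 'HQoM46m657' | base64)
password=$(printf '%s' "$encoded" | base64 -d)
echo "DB_PASSWORD:$password"
```

In the real run the `encoded` value would come from `kubectl get secrets ... -o jsonpath="{.data.password}"`.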
describe cluster
`kbcli cluster describe elastics-tcnsxn --namespace ns-nzwma`
Name: elastics-tcnsxn	 Created Time: Sep 11,2025 17:21 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
ns-nzwma                                    Running   WipeOut

Endpoints:
COMPONENT   INTERNAL                                                       EXTERNAL
master      elastics-tcnsxn-master-agent.ns-nzwma.svc.cluster.local:8080
            elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200
data        elastics-tcnsxn-data-agent.ns-nzwma.svc.cluster.local:8080
            elastics-tcnsxn-data-http.ns-nzwma.svc.cluster.local:9200
kibana      elastics-tcnsxn-kibana-http.ns-nzwma.svc.cluster.local:5601

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE                   ROLE   STATUS    AZ   NODE                                             CREATED-TIME
data        8.1.3             elastics-tcnsxn-data-0            Running   0    aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
data        8.1.3             elastics-tcnsxn-data-1            Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
data        8.1.3             elastics-tcnsxn-data-2            Running   0    aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
kibana      8.1.3             elastics-tcnsxn-kibana-0          Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
master      8.1.3             elastics-tcnsxn-master-0          Running   0    aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
master      8.1.3             elastics-tcnsxn-master-1          Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
master      8.1.3             elastics-tcnsxn-master-2          Running   0    aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
master                          500m / 500m          2Gi / 2Gi               data:20Gi      default
data                            500m / 500m          2Gi / 2Gi               data:20Gi      default
kibana                          500m / 500m          2Gi / 2Gi

Images:
COMPONENT   COMPONENT-DEFINITION           IMAGE
master      elasticsearch-master-8-1.0.1   docker.io/apecloud/elasticsearch:8.1.3
                                           docker.io/apecloud/elasticsearch-exporter:v1.7.0
                                           docker.io/apecloud/curl-jq:0.1.0
data        elasticsearch-data-8-1.0.1     docker.io/apecloud/elasticsearch:8.1.3
                                           docker.io/apecloud/elasticsearch-exporter:v1.7.0
                                           docker.io/apecloud/curl-jq:0.1.0
kibana      kibana-8-1.0.1                 docker.io/apecloud/kibana:8.1.3

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-nzwma elastics-tcnsxn

`kbcli cluster label elastics-tcnsxn app.kubernetes.io/instance- --namespace ns-nzwma`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label elastics-tcnsxn app.kubernetes.io/instance=elastics-tcnsxn --namespace ns-nzwma`
`kbcli cluster label elastics-tcnsxn --list --namespace ns-nzwma`
NAME              NAMESPACE   LABELS
elastics-tcnsxn   ns-nzwma    app.kubernetes.io/instance=elastics-tcnsxn
label cluster app.kubernetes.io/instance=elastics-tcnsxn Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=elastics-tcnsxn --namespace ns-nzwma`
`kbcli cluster label elastics-tcnsxn --list --namespace ns-nzwma`
NAME              NAMESPACE   LABELS
elastics-tcnsxn   ns-nzwma    app.kubernetes.io/instance=elastics-tcnsxn case.name=kbcli.test1
label cluster case.name=kbcli.test1 Success
`kbcli cluster label elastics-tcnsxn case.name=kbcli.test2 --overwrite --namespace ns-nzwma`
`kbcli cluster label elastics-tcnsxn --list --namespace ns-nzwma`
NAME              NAMESPACE   LABELS
elastics-tcnsxn   ns-nzwma    app.kubernetes.io/instance=elastics-tcnsxn case.name=kbcli.test2
label cluster case.name=kbcli.test2 Success
`kbcli cluster label elastics-tcnsxn case.name- --namespace ns-nzwma`
`kbcli cluster label elastics-tcnsxn --list --namespace ns-nzwma`
NAME              NAMESPACE   LABELS
elastics-tcnsxn   ns-nzwma    app.kubernetes.io/instance=elastics-tcnsxn
delete cluster label case.name Success
cluster connect
No resources found in ns-nzwma namespace.
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
Defaulted container "elasticsearch" out of: elasticsearch, exporter, es-agent, kbagent, prepare-plugins (init), install-plugins (init), install-es-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
{
  "cluster_name" : "ns-nzwma",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 7,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-elastics-tcnsxn" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-tcnsxn --namespace ns-nzwma`
Error from server (NotFound): pods "test-db-client-executionloop-elastics-tcnsxn" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-elastics-tcnsxn" not found
`kubectl get secrets -l app.kubernetes.io/instance=elastics-tcnsxn`
set secret: elastics-tcnsxn-master-account-elastic
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic; DB_PASSWORD:HQoM46m657; DB_PORT:9200; DB_DATABASE:elastic
No resources found in ns-nzwma namespace.

apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-elastics-tcnsxn
  namespace: ns-nzwma
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local"
        - "--user"
        - "elastic"
        - "--password"
        - "HQoM46m657"
        - "--port"
        - "9200"
        - "--dbtype"
        - "elasticsearch"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never

`kubectl apply -f test-db-client-executionloop-elastics-tcnsxn.yaml`
pod/test-db-client-executionloop-elastics-tcnsxn created
apply test-db-client-executionloop-elastics-tcnsxn.yaml Success
`rm -rf test-db-client-executionloop-elastics-tcnsxn.yaml`
check pod status
pod_status: NAME READY STATUS RESTARTS AGE
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 5s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 9s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 14s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 20s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 25s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 30s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 35s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 40s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 45s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 50s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 56s
pod_status: test-db-client-executionloop-elastics-tcnsxn 1/1 Running 0 61s
check pod test-db-client-executionloop-elastics-tcnsxn status done
pod_status: test-db-client-executionloop-elastics-tcnsxn 0/1 Completed 0 66s
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
elastics-tcnsxn   ns-nzwma                         WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=elastics-tcnsxn
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
elastics-tcnsxn-data-0     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-1     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-2     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-kibana-0   ns-nzwma    elastics-tcnsxn   kibana      Running   0   500m / 500m   2Gi / 2Gi               aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-1   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-2   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
--host elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local --user elastic --password HQoM46m657 --port 9200 --dbtype elasticsearch --test executionloop --duration 60 --interval 1
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
Execution loop start: Index executions_loop_index does not exist. Creating index...
Index executions_loop_index created successfully.
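The repeated `pod_status:`/`cluster_status:` lines above come from a poll-until-ready loop. A sketch of that pattern, with the status source stubbed out so the example runs without a cluster (in the real test it would be a `kbcli cluster list` or `kubectl get pods` call):

```shell
# Stub standing in for `kbcli cluster list ...`: reports Creating for the
# first two polls, then Running. This is an illustration, not test code.
cluster_status() {
  if [ "$1" -lt 3 ]; then echo Creating; else echo Running; fi
}

attempt=0
while :; do
  attempt=$((attempt + 1))
  status=$(cluster_status "$attempt")
  echo "cluster_status:$status"
  [ "$status" = "Running" ] && break
  sleep 0   # the real loop would sleep a few seconds between polls
done
```

The real loop would also cap `attempt` and fail the case on timeout rather than spinning forever.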
Execution loop start: insert:executions_loop_index:{"id": "1757582748438", "name": "executions_loop_1", "value": "1"}
[  1s ] executions total: 9 successful: 9 failed: 0 disconnect: 0
[  2s ] executions total: 58 successful: 58 failed: 0 disconnect: 0
[  3s ] executions total: 122 successful: 122 failed: 0 disconnect: 0
[  4s ] executions total: 189 successful: 189 failed: 0 disconnect: 0
[  5s ] executions total: 271 successful: 271 failed: 0 disconnect: 0
[  6s ] executions total: 345 successful: 345 failed: 0 disconnect: 0
[  7s ] executions total: 422 successful: 422 failed: 0 disconnect: 0
[  8s ] executions total: 489 successful: 489 failed: 0 disconnect: 0
[  9s ] executions total: 558 successful: 558 failed: 0 disconnect: 0
[ 10s ] executions total: 648 successful: 648 failed: 0 disconnect: 0
[ 11s ] executions total: 712 successful: 712 failed: 0 disconnect: 0
[ 12s ] executions total: 788 successful: 788 failed: 0 disconnect: 0
[ 13s ] executions total: 854 successful: 854 failed: 0 disconnect: 0
[ 14s ] executions total: 936 successful: 936 failed: 0 disconnect: 0
[ 15s ] executions total: 1020 successful: 1020 failed: 0 disconnect: 0
[ 16s ] executions total: 1100 successful: 1100 failed: 0 disconnect: 0
[ 17s ] executions total: 1161 successful: 1161 failed: 0 disconnect: 0
[ 18s ] executions total: 1228 successful: 1228 failed: 0 disconnect: 0
[ 19s ] executions total: 1298 successful: 1298 failed: 0 disconnect: 0
[ 20s ] executions total: 1337 successful: 1337 failed: 0 disconnect: 0
[ 21s ] executions total: 1425 successful: 1425 failed: 0 disconnect: 0
[ 22s ] executions total: 1506 successful: 1506 failed: 0 disconnect: 0
[ 23s ] executions total: 1580 successful: 1580 failed: 0 disconnect: 0
[ 24s ] executions total: 1648 successful: 1648 failed: 0 disconnect: 0
[ 25s ] executions total: 1702 successful: 1702 failed: 0 disconnect: 0
[ 26s ] executions total: 1766 successful: 1766 failed: 0 disconnect: 0
[ 27s ] executions total: 1822 successful: 1822 failed: 0 disconnect: 0
[ 28s ] executions total: 1881 successful: 1881 failed: 0 disconnect: 0
[ 29s ] executions total: 1937 successful: 1937 failed: 0 disconnect: 0
[ 30s ] executions total: 2011 successful: 2011 failed: 0 disconnect: 0
[ 31s ] executions total: 2096 successful: 2096 failed: 0 disconnect: 0
[ 32s ] executions total: 2173 successful: 2173 failed: 0 disconnect: 0
[ 33s ] executions total: 2267 successful: 2267 failed: 0 disconnect: 0
[ 34s ] executions total: 2351 successful: 2351 failed: 0 disconnect: 0
[ 35s ] executions total: 2431 successful: 2431 failed: 0 disconnect: 0
[ 36s ] executions total: 2496 successful: 2496 failed: 0 disconnect: 0
[ 37s ] executions total: 2583 successful: 2583 failed: 0 disconnect: 0
[ 38s ] executions total: 2636 successful: 2636 failed: 0 disconnect: 0
[ 39s ] executions total: 2688 successful: 2688 failed: 0 disconnect: 0
[ 40s ] executions total: 2765 successful: 2765 failed: 0 disconnect: 0
[ 41s ] executions total: 2828 successful: 2828 failed: 0 disconnect: 0
[ 42s ] executions total: 2903 successful: 2903 failed: 0 disconnect: 0
[ 43s ] executions total: 2989 successful: 2989 failed: 0 disconnect: 0
[ 44s ] executions total: 3048 successful: 3048 failed: 0 disconnect: 0
[ 45s ] executions total: 3127 successful: 3127 failed: 0 disconnect: 0
[ 46s ] executions total: 3197 successful: 3197 failed: 0 disconnect: 0
[ 47s ] executions total: 3285 successful: 3285 failed: 0 disconnect: 0
[ 48s ] executions total: 3377 successful: 3377 failed: 0 disconnect: 0
[ 49s ] executions total: 3468 successful: 3468 failed: 0 disconnect: 0
[ 50s ] executions total: 3558 successful: 3558 failed: 0 disconnect: 0
[ 51s ] executions total: 3646 successful: 3646 failed: 0 disconnect: 0
[ 52s ] executions total: 3728 successful: 3728 failed: 0 disconnect: 0
[ 53s ] executions total: 3823 successful: 3823 failed: 0 disconnect: 0
[ 54s ] executions total: 3916 successful: 3916 failed: 0 disconnect: 0
[ 55s ] executions total: 4013 successful: 4013 failed: 0 disconnect: 0
[ 56s ] executions total: 4099 successful: 4099 failed: 0 disconnect: 0
[ 57s ] executions total: 4197 successful: 4197 failed: 0 disconnect: 0
[ 58s ] executions total: 4288 successful: 4288 failed: 0 disconnect: 0
[ 59s ] executions total: 4383 successful: 4383 failed: 0 disconnect: 0
[ 60s ] executions total: 4428 successful: 4428 failed: 0 disconnect: 0

Test Result:
Total Executions: 4428
Successful Executions: 4428
Failed Executions: 0
Disconnection Counts: 0

Connection Information:
Database Type: elasticsearch
Host: elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local
Port: 9200
Database:
Table:
User: elastic
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds

DB_CLIENT_BATCH_DATA_COUNT: 4428
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-tcnsxn --namespace ns-nzwma`
pod/test-db-client-executionloop-elastics-tcnsxn patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-elastics-tcnsxn" force deleted
No resources found in ns-nzwma namespace.
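The summary above reports 4428 executions over a 60-second run, so the average throughput follows directly (shell arithmetic is integer, so the result is truncated):

```shell
# Figures taken from the Test Result block above.
total=4428
duration=60
echo "avg executions/sec: $((total / duration))"   # 4428 / 60, truncated to 73
```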
`echo "curl -X POST 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/boss/_doc/1?pretty' -H 'Content-Type: application/json' -d '{\"datainsert\":\"odhzs\"}'" | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
Defaulted container "elasticsearch" out of: elasticsearch, exporter, es-agent, kbagent, prepare-plugins (init), install-plugins (init), install-es-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
{
  "_index" : "boss",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
add consistent data odhzs Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data --namespace ns-nzwma | (grep "data" || true )`
cluster data scale-out
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-tcnsxn namespace.
`kbcli cluster scale-out elastics-tcnsxn --auto-approve --force=true --components data --replicas 1 --namespace ns-nzwma`
OpsRequest elastics-tcnsxn-horizontalscaling-l7kpd created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-tcnsxn-horizontalscaling-l7kpd -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS     PROGRESS   CREATED-TIME
elastics-tcnsxn-horizontalscaling-l7kpd   ns-nzwma    HorizontalScaling   elastics-tcnsxn   data        Creating   -/-        Sep 11,2025 17:27 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-tcnsxn   ns-nzwma                         WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
elastics-tcnsxn-data-0     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-1     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-2     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-3     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:27 UTC+0800
elastics-tcnsxn-kibana-0   ns-nzwma    elastics-tcnsxn   kibana      Running   0   500m / 500m   2Gi / 2Gi               aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-1   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-2   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
No resources found in elastics-tcnsxn namespace.
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-tcnsxn-horizontalscaling-l7kpd   ns-nzwma    HorizontalScaling   elastics-tcnsxn   data        Succeed   1/1        Sep 11,2025 17:27 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-horizontalscaling-l7kpd ns-nzwma HorizontalScaling elastics-tcnsxn data Succeed 1/1 Sep 11,2025 17:27 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-horizontalscaling-l7kpd --namespace ns-nzwma`
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-horizontalscaling-l7kpd patched
`kbcli cluster delete-ops --name elastics-tcnsxn-horizontalscaling-l7kpd --force --auto-approve --namespace ns-nzwma`
OpsRequest elastics-tcnsxn-horizontalscaling-l7kpd deleted
No resources found in ns-nzwma namespace.
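`kbcli cluster scale-out` creates the `elastics-tcnsxn-horizontalscaling-*` OpsRequest objects seen in this run. A hedged sketch of roughly what such an object looks like; the API group is inferred from the `opsrequests.operations` resource patched in this log, and the exact field names are assumptions rather than captured output:

```yaml
# Sketch only: field names assumed, not taken from this run's output.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-tcnsxn-horizontalscaling-
  namespace: ns-nzwma
spec:
  clusterName: elastics-tcnsxn
  type: HorizontalScaling
  horizontalScaling:
    - componentName: data
      scaleOut:
        replicaChanges: 1
```

Consult the OpsRequest CRD installed in the cluster (`kubectl explain opsrequest.spec`) for the authoritative schema.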
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data --namespace ns-nzwma | (grep "data" || true )`
cluster data scale-in
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-tcnsxn namespace.
`kbcli cluster scale-in elastics-tcnsxn --auto-approve --force=true --components data --replicas 1 --namespace ns-nzwma`
OpsRequest elastics-tcnsxn-horizontalscaling-mt8lt created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-tcnsxn-horizontalscaling-mt8lt -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-tcnsxn-horizontalscaling-mt8lt   ns-nzwma    HorizontalScaling   elastics-tcnsxn   data        Running   -/-        Sep 11,2025 17:29 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-tcnsxn   ns-nzwma                         WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                             CREATED-TIME
elastics-tcnsxn-data-0     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-1     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-data-2     ns-nzwma    elastics-tcnsxn   data        Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-kibana-0   ns-nzwma    elastics-tcnsxn   kibana      Running   0   500m / 500m   2Gi / 2Gi               aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-1   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-2   ns-nzwma    elastics-tcnsxn   master      Running   0   500m / 500m   2Gi / 2Gi   data:20Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
No resources found in elastics-tcnsxn namespace.
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-horizontalscaling-mt8lt ns-nzwma HorizontalScaling elastics-tcnsxn data Succeed 1/1 Sep 11,2025 17:29 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-horizontalscaling-mt8lt ns-nzwma HorizontalScaling elastics-tcnsxn data Succeed 1/1 Sep 11,2025 17:29 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-horizontalscaling-mt8lt --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-horizontalscaling-mt8lt patched
`kbcli cluster delete-ops --name elastics-tcnsxn-horizontalscaling-mt8lt --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-horizontalscaling-mt8lt deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster master scale-out
cluster master scale-out replicas: 4
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-tcnsxn namespace.
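Each `kbcli cluster scale-in` / `scale-out` call in this run is a front-end for creating a `HorizontalScaling` OpsRequest. A sketch of the equivalent manifest, based on the KubeBlocks 1.0 operations API (field names are an approximation, not a dump of what kbcli actually generated here):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-tcnsxn-horizontalscaling-
  namespace: ns-nzwma
spec:
  clusterName: elastics-tcnsxn
  type: HorizontalScaling
  force: true
  horizontalScaling:
    - componentName: data
      scaleIn:
        replicaChanges: 1   # scale-out would use scaleOut instead
```

Applying a manifest like this should produce the same `elastics-tcnsxn-horizontalscaling-*` objects that `kbcli cluster list-ops` shows above.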
`kbcli cluster scale-out elastics-tcnsxn --auto-approve --force=true --components master --replicas 1 --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-horizontalscaling-m7ffv created successfully, you can view the progress:
        kbcli cluster describe-ops elastics-tcnsxn-horizontalscaling-m7ffv -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-horizontalscaling-m7ffv ns-nzwma HorizontalScaling elastics-tcnsxn master Running -/- Sep 11,2025 17:29 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:34 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:32 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:30 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:40 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:38 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:36 UTC+0800
elastics-tcnsxn-master-3 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:29 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
No resources found in elastics-tcnsxn namespace.
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-horizontalscaling-m7ffv ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 17:29 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-horizontalscaling-m7ffv ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 17:29 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-horizontalscaling-m7ffv --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-horizontalscaling-m7ffv patched
`kbcli cluster delete-ops --name elastics-tcnsxn-horizontalscaling-m7ffv --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-horizontalscaling-m7ffv deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster master scale-in
cluster master scale-in replicas: 3
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-tcnsxn namespace.
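The same cleanup pattern follows every ops above: clear the OpsRequest's finalizers with a merge patch, then force-delete it. A parameterized sketch; it prints the commands instead of executing them, so the construction can be checked offline (running them for real needs the cluster):

```shell
# cleanup_ops_cmds: emit the finalizer-patch and force-delete commands
# for one OpsRequest. $1: opsrequest name, $2: namespace.
cleanup_ops_cmds() {
  printf "kubectl patch -p '%s' --type=merge opsrequests.operations %s --namespace %s\n" \
    '{"metadata":{"finalizers":[]}}' "$1" "$2"
  printf "kbcli cluster delete-ops --name %s --force --auto-approve --namespace %s\n" \
    "$1" "$2"
}
```

Dropping the commands straight into a pipeline script (piped to `sh`) would reproduce the `patched` / `deleted` lines seen in the log.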
`kbcli cluster scale-in elastics-tcnsxn --auto-approve --force=true --components master --replicas 1 --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-horizontalscaling-sdpdj created successfully, you can view the progress:
        kbcli cluster describe-ops elastics-tcnsxn-horizontalscaling-sdpdj -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-horizontalscaling-sdpdj ns-nzwma HorizontalScaling elastics-tcnsxn master Running -/- Sep 11,2025 17:41 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:46 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:44 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:42 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:21 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:46 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:44 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:42 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
No resources found in elastics-tcnsxn namespace.
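The `check ops status` steps above read the STATUS column out of the `kbcli cluster list-ops` table (columns: NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME, so STATUS is field 6). A sketch of that extraction as pure text processing, checkable on a captured sample without a cluster:

```shell
# ops_status: given an OpsRequest name ($1) and the list-ops table on
# stdin, print the STATUS column (field 6) of the matching row.
ops_status() {
  awk -v name="$1" '$1 == name { print $6 }'
}
```

A script can then branch on `Succeed` / `Running` / `Failed` instead of eyeballing the table.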
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-horizontalscaling-sdpdj ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 17:41 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-horizontalscaling-sdpdj ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 17:41 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-horizontalscaling-sdpdj --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-horizontalscaling-sdpdj patched
`kbcli cluster delete-ops --name elastics-tcnsxn-horizontalscaling-sdpdj --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-horizontalscaling-sdpdj deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-tcnsxn --auto-approve --force=true --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-n7wnj created successfully, you can view the progress:
        kbcli cluster describe-ops elastics-tcnsxn-restart-n7wnj -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-n7wnj ns-nzwma Restart elastics-tcnsxn master,data,kibana Running -/- Sep 11,2025 17:48 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:48 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:49 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-n7wnj ns-nzwma Restart elastics-tcnsxn master,data,kibana Succeed 7/7 Sep 11,2025 17:48 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-restart-n7wnj ns-nzwma Restart elastics-tcnsxn master,data,kibana Succeed 7/7 Sep 11,2025 17:48 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-restart-n7wnj --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-restart-n7wnj patched
`kbcli cluster delete-ops --name elastics-tcnsxn-restart-n7wnj --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-n7wnj deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data --namespace ns-nzwma | (grep "data" || true )`
`kubectl get pvc -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data,apps.kubeblocks.io/vct-name=data --namespace ns-nzwma `
cluster volume-expand
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-tcnsxn namespace.
`kbcli cluster volume-expand elastics-tcnsxn --auto-approve --force=true --components data --volume-claim-templates data --storage 22Gi --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-volumeexpansion-z5hh7 created successfully, you can view the progress:
        kbcli cluster describe-ops elastics-tcnsxn-volumeexpansion-z5hh7 -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-volumeexpansion-z5hh7 ns-nzwma VolumeExpansion elastics-tcnsxn data Running -/- Sep 11,2025 17:55 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:48 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:49 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
No resources found in elastics-tcnsxn namespace.
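Each `check cluster connect` step above curls `/_cluster/health?pretty` from inside a master pod. A sketch of pulling the `status` field out of the JSON that endpoint returns; a plain sed extraction is enough for the known response shape, and it can be exercised offline on a sample response:

```shell
# health_status: read Elasticsearch _cluster/health JSON on stdin and
# print the value of the "status" field (green/yellow/red).
health_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}
```

In the real run this would be fed by the same `kubectl exec ... curl` shown in the log.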
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-volumeexpansion-z5hh7 ns-nzwma VolumeExpansion elastics-tcnsxn data Succeed 3/3 Sep 11,2025 17:55 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-volumeexpansion-z5hh7 ns-nzwma VolumeExpansion elastics-tcnsxn data Succeed 3/3 Sep 11,2025 17:55 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-volumeexpansion-z5hh7 --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-volumeexpansion-z5hh7 patched
`kbcli cluster delete-ops --name elastics-tcnsxn-volumeexpansion-z5hh7 --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-volumeexpansion-z5hh7 deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-tcnsxn --auto-approve --force=true --components master --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-vvdjv created successfully, you can view the progress:
        kbcli cluster describe-ops elastics-tcnsxn-restart-vvdjv -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-vvdjv ns-nzwma Restart elastics-tcnsxn master Running -/- Sep 11,2025 18:01 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:48 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:05 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:03 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:01 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-vvdjv ns-nzwma Restart elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:01 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-restart-vvdjv ns-nzwma Restart elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:01 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-restart-vvdjv --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-restart-vvdjv patched
`kbcli cluster delete-ops --name elastics-tcnsxn-restart-vvdjv --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-vvdjv deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
test failover connectionstress
check cluster status before cluster-failover-connectionstress
check cluster status done cluster_status:Running
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-tcnsxn" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-tcnsxn --namespace ns-nzwma `
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-tcnsxn" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-tcnsxn" not found
`kubectl get secrets -l app.kubernetes.io/instance=elastics-tcnsxn`
set secret: elastics-tcnsxn-master-account-elastic
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-tcnsxn-master-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic;DB_PASSWORD:HQoM46m657;DB_PORT:9200;DB_DATABASE:elastic
No resources found in ns-nzwma namespace.
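The DB_USERNAME / DB_PASSWORD / DB_PORT values above are read from the component account secret: jsonpath selects the base64-encoded field, which is then decoded. The kubectl half needs the cluster, so the decode step is factored out here where it can be checked locally (secret and field names taken from the log):

```shell
# decode_field: decode one base64-encoded secret field from stdin.
# Real usage (requires the cluster):
#   kubectl get secrets elastics-tcnsxn-master-account-elastic \
#     --namespace ns-nzwma -o jsonpath='{.data.username}' | decode_field
decode_field() {
  base64 --decode
}
```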
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-elastics-tcnsxn
  namespace: ns-nzwma
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local"
        - "--user"
        - "elastic"
        - "--password"
        - "HQoM46m657"
        - "--port"
        - "9200"
        - "--database"
        - "elastic"
        - "--dbtype"
        - "elasticsearch"
        - "--test"
        - "connectionstress"
        - "--connections"
        - "1024"
        - "--duration"
        - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-elastics-tcnsxn.yaml`
pod/test-db-client-connectionstress-elastics-tcnsxn created
apply test-db-client-connectionstress-elastics-tcnsxn.yaml Success
`rm -rf test-db-client-connectionstress-elastics-tcnsxn.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE
test-db-client-connectionstress-elastics-tcnsxn 1/1 Running 0 5s
check pod test-db-client-connectionstress-elastics-tcnsxn status done
pod_status:NAME READY STATUS RESTARTS AGE
test-db-client-connectionstress-elastics-tcnsxn 0/1 Completed 0 18s
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:48 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:05 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:03 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 500m / 500m 2Gi / 2Gi data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:01 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
--host elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local --user elastic --password HQoM46m657 --port 9200 --database elastic --dbtype elasticsearch --test connectionstress --connections 1024 --duration 60
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
Test Result: Created 1024 connections
Connection Information: Database Type: elasticsearch Host: elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local Port: 9200 Database: elastic Table: User: elastic Org: Access Mode: mysql Test Type: connectionstress Connection Count: 1024 Duration: 60 seconds
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-tcnsxn --namespace ns-nzwma `
pod/test-db-client-connectionstress-elastics-tcnsxn patched (no change)
Warning: Immediate deletion does not wait for confirmation
that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-connectionstress-elastics-tcnsxn" force deleted
check failover pod name
failover pod name:elastics-tcnsxn-master-0
failover connectionstress Success
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster vscale elastics-tcnsxn --auto-approve --force=true --components master --cpu 600m --memory 2.1Gi --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-65jbr created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-verticalscaling-65jbr -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-65jbr ns-nzwma VerticalScaling elastics-tcnsxn master Running -/- Sep 11,2025 18:08 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:48 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:12 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:10 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:08 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
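After the vertical scale to `--memory 2.1Gi`, `list-instances` reports the master memory limit as `2254857830400m`: Kubernetes renders non-integer byte quantities in milli-units. A minimal sketch (plain arithmetic, no cluster access) converting that milli-quantity back to GiB to confirm it matches the requested 2.1Gi:

```shell
# Convert a Kubernetes milli-quantity (e.g. 2254857830400m bytes) to GiB.
# 2.1Gi = 2.1 * 1024^3 = 2254857830.4 bytes, which kubectl prints as
# 2254857830400m because the byte value is not an integer.
milli_to_gib() {
  q="${1%m}"    # strip the trailing 'm' (milli) suffix
  awk -v v="$q" 'BEGIN { printf "%.1f", v / 1000 / (1024 * 1024 * 1024) }'
}

milli_to_gib 2254857830400m   # prints 2.1
```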
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-65jbr ns-nzwma VerticalScaling elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:08 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-verticalscaling-65jbr ns-nzwma VerticalScaling elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:08 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-verticalscaling-65jbr --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-verticalscaling-65jbr patched
`kbcli cluster delete-ops --name elastics-tcnsxn-verticalscaling-65jbr --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-65jbr deleted
No resources found in ns-nzwma namespace.
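Each ops-request teardown above first clears `metadata.finalizers` with a JSON merge patch so the forced delete cannot be blocked by a stuck finalizer (merge semantics replace the whole array with the empty list). A standalone sketch of the payload, validated locally before it would be handed to kubectl (python3 on the runner is an assumption; no cluster is touched here):

```shell
# JSON merge-patch payload that empties the finalizers list.
PATCH='{"metadata":{"finalizers":[]}}'

# Sanity-check that the payload parses as JSON.
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "payload ok"

# Against a live cluster it would be applied as (not run here):
#   kubectl patch --type=merge -p "$PATCH" opsrequests.operations <name> -n <ns>
```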
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check component kibana exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=kibana --namespace ns-nzwma | (grep "kibana" || true )`
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-tcnsxn --auto-approve --force=true --components kibana --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-dmndl created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-restart-dmndl -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-dmndl ns-nzwma Restart elastics-tcnsxn kibana Running -/- Sep 11,2025 18:13 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi
aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:53 UTC+0800 elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:51 UTC+0800 elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:49 UTC+0800 elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:14 UTC+0800 elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:12 UTC+0800 elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:10 UTC+0800 elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:08 UTC+0800 check pod status done No resources found in ns-nzwma namespace. 
check cluster connect `echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check cluster connect done check ops status `kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-tcnsxn-restart-dmndl ns-nzwma Restart elastics-tcnsxn kibana Succeed 1/1 Sep 11,2025 18:13 UTC+0800 check ops status done ops_status:elastics-tcnsxn-restart-dmndl ns-nzwma Restart elastics-tcnsxn kibana Succeed 1/1 Sep 11,2025 18:13 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations elastics-tcnsxn-restart-dmndl --namespace ns-nzwma ` opsrequest.operations.kubeblocks.io/elastics-tcnsxn-restart-dmndl patched `kbcli cluster delete-ops --name elastics-tcnsxn-restart-dmndl --force --auto-approve --namespace ns-nzwma ` OpsRequest elastics-tcnsxn-restart-dmndl deleted No resources found in ns-nzwma namespace. 
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data --namespace ns-nzwma | (grep "data" || true )`
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-tcnsxn --auto-approve --force=true --components data --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-7mg9l created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-restart-7mg9l -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-7mg9l ns-nzwma Restart elastics-tcnsxn data Running -/- Sep 11,2025 18:15 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:18 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:16 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:14 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:12 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:10 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:08 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
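The "check db_client batch data count" steps query `_search` with `{"size": 0, "track_total_hits": true}` so the response carries only the exact hit total, no documents. A sketch of parsing `hits.total.value` out of such a response with sed (canned sample in place of the real curl output; jq is deliberately not assumed):

```shell
# Pull hits.total.value out of a track_total_hits search response.
total_hits() {
  sed -n 's/.*"total"[^{]*{[^}]*"value" *: *\([0-9]*\).*/\1/p'
}

# Canned sample standing in for the real _search response.
SAMPLE='{"took":3,"hits":{"total":{"value":1000,"relation":"eq"},"hits":[]}}'
echo "$SAMPLE" | total_hits   # prints 1000
```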
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-restart-7mg9l ns-nzwma Restart elastics-tcnsxn data Succeed 3/3 Sep 11,2025 18:15 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-restart-7mg9l ns-nzwma Restart elastics-tcnsxn data Succeed 3/3 Sep 11,2025 18:15 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-restart-7mg9l --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-restart-7mg9l patched
`kbcli cluster delete-ops --name elastics-tcnsxn-restart-7mg9l --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-restart-7mg9l deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster hscale offline instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-tcnsxn-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: elastics-tcnsxn
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-nzwma
spec:
  type: HorizontalScaling
  clusterName: elastics-tcnsxn
  force: true
  horizontalScaling:
  - componentName: master
    scaleIn:
      onlineInstancesToOffline:
      - elastics-tcnsxn-master-0
check cluster status before ops
check cluster status done cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-tcnsxn.yaml`
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-hscaleoffinstance-7d2vp created
create test_ops_cluster_elastics-tcnsxn.yaml Success
`rm -rf test_ops_cluster_elastics-tcnsxn.yaml`
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-hscaleoffinstance-7d2vp ns-nzwma HorizontalScaling elastics-tcnsxn master Creating -/- Sep 11,2025 18:21 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:18 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:22 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:14 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:10 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:22 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-1 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-hscaleoffinstance-7d2vp ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 18:21 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-hscaleoffinstance-7d2vp ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 18:21 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-hscaleoffinstance-7d2vp --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-hscaleoffinstance-7d2vp patched
`kbcli cluster delete-ops --name elastics-tcnsxn-hscaleoffinstance-7d2vp --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-hscaleoffinstance-7d2vp deleted
No resources found in ns-nzwma namespace.
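The scale-in/scale-out steps write an OpsRequest manifest to a temporary file before running `kubectl create -f`. A sketch of generating the scaleIn variant shown above with a heredoc; cluster, namespace, and instance names are parameters, and only the file is produced here (no kubectl call, no cluster access):

```shell
# Generate a HorizontalScaling scale-in OpsRequest manifest as a file.
CLUSTER=elastics-tcnsxn
NS=ns-nzwma
INSTANCE=elastics-tcnsxn-master-0

cat > test_ops_cluster.yaml <<EOF
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: ${CLUSTER}-hscaleoffinstance-
  namespace: ${NS}
spec:
  type: HorizontalScaling
  clusterName: ${CLUSTER}
  force: true
  horizontalScaling:
  - componentName: master
    scaleIn:
      onlineInstancesToOffline:
      - ${INSTANCE}
EOF

grep -c "onlineInstancesToOffline" test_ops_cluster.yaml   # prints 1
```

The log then applies the file with `kubectl create -f` and removes it; `generateName` lets the API server append the random suffix seen in the ops names (e.g. `-7d2vp`).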
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-1 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster hscale online instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-tcnsxn-hscaleoninstance-
  labels:
    app.kubernetes.io/instance: elastics-tcnsxn
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-nzwma
spec:
  type: HorizontalScaling
  clusterName: elastics-tcnsxn
  force: true
  horizontalScaling:
  - componentName: master
    scaleOut:
      offlineInstancesToOnline:
      - elastics-tcnsxn-master-0
check cluster status before ops
check cluster status done cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-tcnsxn.yaml`
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-hscaleoninstance-894b7 created
create test_ops_cluster_elastics-tcnsxn.yaml Success
`rm -rf test_ops_cluster_elastics-tcnsxn.yaml`
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-hscaleoninstance-894b7 ns-nzwma HorizontalScaling elastics-tcnsxn master Running 0/1 Sep 11,2025 18:31 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:18 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:32 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:14 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:31 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:10 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:22 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-hscaleoninstance-894b7 ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 18:31 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-hscaleoninstance-894b7 ns-nzwma HorizontalScaling elastics-tcnsxn master Succeed 1/1 Sep 11,2025 18:31 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-hscaleoninstance-894b7 --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-hscaleoninstance-894b7 patched
`kbcli cluster delete-ops --name elastics-tcnsxn-hscaleoninstance-894b7 --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-hscaleoninstance-894b7 deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
cluster stop
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster stop elastics-tcnsxn --auto-approve --force=true --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-stop-scvpc created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-stop-scvpc -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-stop-scvpc ns-nzwma Stop elastics-tcnsxn data,kibana,master Running 0/7 Sep 11,2025 18:36 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Stopping Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Stopping (repeated while the cluster stopped)
check cluster status done cluster_status:Stopped
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-stop-scvpc ns-nzwma Stop elastics-tcnsxn data,kibana,master Succeed 7/7 Sep 11,2025 18:36 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-stop-scvpc ns-nzwma Stop elastics-tcnsxn data,kibana,master Succeed 7/7 Sep 11,2025 18:36 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-stop-scvpc --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-stop-scvpc patched
`kbcli cluster delete-ops --name elastics-tcnsxn-stop-scvpc --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-stop-scvpc deleted
cluster start
check cluster status before ops
check cluster status done cluster_status:Stopped
`kbcli cluster start elastics-tcnsxn --force=true --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-start-kh7c2 created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-start-kh7c2 -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-start-kh7c2 ns-nzwma Start elastics-tcnsxn data,kibana,master Running 0/7 Sep 11,2025 18:38 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep
11,2025 18:38 UTC+0800 elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 500m / 500m 2Gi / 2Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800 check pod status done No resources found in ns-nzwma namespace. 
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-start-kh7c2 ns-nzwma Start elastics-tcnsxn data,kibana,master Succeed 7/7 Sep 11,2025 18:38 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-start-kh7c2 ns-nzwma Start elastics-tcnsxn data,kibana,master Succeed 7/7 Sep 11,2025 18:38 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-start-kh7c2 --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-start-kh7c2 patched
`kbcli cluster delete-ops --name elastics-tcnsxn-start-kh7c2 --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-start-kh7c2 deleted
No resources found in ns-nzwma namespace.
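The repeated `cluster_status:...` lines throughout this log come from polling `kbcli cluster list` until the phase settles (Running, or Stopped for the stop op). A minimal wait-until sketch with a retry cap; `get_status` is a stub here since no cluster is available, whereas in the harness it would wrap the kbcli call and read the STATUS column:

```shell
# Poll a status probe until it reports the wanted phase or retries run out.
wait_for_status() {
  want="$1"; tries="$2"
  i=0
  while [ "$i" -lt "$tries" ]; do
    s=$(get_status)
    echo "cluster_status:$s"
    [ "$s" = "$want" ] && return 0
    i=$((i + 1))
  done
  return 1
}

# Stub probe: report Updating on the first two calls, Running afterwards.
# A file carries the call count across command substitutions.
get_status() {
  n=$(cat /tmp/probe_count 2>/dev/null || echo 0)
  n=$((n + 1)); echo "$n" > /tmp/probe_count
  if [ "$n" -ge 3 ]; then echo Running; else echo Updating; fi
}

rm -f /tmp/probe_count
wait_for_status Running 10   # prints Updating twice, then cluster_status:Running
```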
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
check component kibana exists
`kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=kibana --namespace ns-nzwma | (grep "kibana" || true )`
check cluster status before ops
cluster_status:Updating (repeated while waiting)
check cluster status done cluster_status:Running
`kbcli cluster vscale elastics-tcnsxn --auto-approve --force=true --components kibana --cpu 600m --memory 2.1Gi --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-6nj59 created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-verticalscaling-6nj59 -n ns-nzwma
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-6nj59 ns-nzwma VerticalScaling elastics-tcnsxn kibana Running -/- Sep 11,2025 18:43 UTC+0800
check cluster status
`kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating (repeated while the ops request ran)
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT)
MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 600m / 600m 2254857830400m / 2254857830400m aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:44 UTC+0800 elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800 elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:20Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800 check pod status done No resources found in ns-nzwma namespace. 
check cluster connect
`echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-6nj59 ns-nzwma VerticalScaling elastics-tcnsxn kibana Succeed 1/1 Sep 11,2025 18:43 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-verticalscaling-6nj59 ns-nzwma VerticalScaling elastics-tcnsxn kibana Succeed 1/1 Sep 11,2025 18:43 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-verticalscaling-6nj59 --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-verticalscaling-6nj59 patched
`kbcli cluster delete-ops --name elastics-tcnsxn-verticalscaling-6nj59 --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-6nj59 deleted
No resources found in ns-nzwma namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh`
check db_client batch data Success
`kubectl get pvc -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=master,apps.kubeblocks.io/vct-name=data --namespace ns-nzwma `
cluster volume-expand
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-tcnsxn namespace.
`kbcli cluster volume-expand elastics-tcnsxn --auto-approve --force=true --components master --volume-claim-templates data --storage 21Gi --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-volumeexpansion-wql8t created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-volumeexpansion-wql8t -n ns-nzwma
check ops status `kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-volumeexpansion-wql8t ns-nzwma VolumeExpansion elastics-tcnsxn master Creating -/- Sep 11,2025 18:45 UTC+0800
check cluster status `kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done cluster_status:Running
check pod status `kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 500m / 500m 2Gi / 2Gi data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 600m / 600m 2254857830400m / 2254857830400m aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:44 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect `echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check cluster connect done
No resources found in elastics-tcnsxn namespace.
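The `volume-expand` ops above grows each master PVC from 20Gi to 21Gi. Under the hood a PVC expansion amounts to raising the claim's `spec.resources.requests.storage`, and Kubernetes only accepts increases; shrinking is rejected. A sketch of that pre-check and the patch body (`to_bytes` and `expansion_patch` are hypothetical helpers, not kbcli internals):

```python
# Binary (power-of-two) suffixes used by Kubernetes resource quantities.
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def to_bytes(quantity):
    """Parse a whole-number quantity like '21Gi' into bytes."""
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain integer means bytes

def expansion_patch(current, requested):
    """Build the PVC patch body, refusing anything that is not a growth."""
    if to_bytes(requested) <= to_bytes(current):
        raise ValueError("PVCs can only be expanded, never shrunk")
    return {"spec": {"resources": {"requests": {"storage": requested}}}}

patch = expansion_patch("20Gi", "21Gi")
```

The long `cluster_status:Updating` run in the log is simply the test harness polling while the CSI driver resizes the underlying disks and the filesystem; the cluster flips back to Running once every claim reports the new capacity.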
check ops status `kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-volumeexpansion-wql8t ns-nzwma VolumeExpansion elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:45 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-volumeexpansion-wql8t ns-nzwma VolumeExpansion elastics-tcnsxn master Succeed 3/3 Sep 11,2025 18:45 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-volumeexpansion-wql8t --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-volumeexpansion-wql8t patched
`kbcli cluster delete-ops --name elastics-tcnsxn-volumeexpansion-wql8t --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-volumeexpansion-wql8t deleted
No resources found in ns-nzwma namespace.
check db_client batch data count `echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check db_client batch data Success
check component data exists `kubectl get components -l app.kubernetes.io/instance=elastics-tcnsxn,apps.kubeblocks.io/component-name=data --namespace ns-nzwma | (grep "data" || true )`
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster vscale elastics-tcnsxn --auto-approve --force=true --components data --cpu 600m --memory 2.1Gi --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-jqd9x created successfully, you can view the progress: kbcli cluster describe-ops elastics-tcnsxn-verticalscaling-jqd9x -n ns-nzwma
check ops status `kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-jqd9x ns-nzwma VerticalScaling elastics-tcnsxn data Running -/- Sep 11,2025 18:51 UTC+0800
check cluster status `kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done cluster_status:Running
check pod status `kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:55 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:53 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:52 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 600m / 600m 2254857830400m / 2254857830400m aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:44 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect `echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check cluster connect done
check ops status `kbcli cluster list-ops elastics-tcnsxn --status all --namespace ns-nzwma `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-tcnsxn-verticalscaling-jqd9x ns-nzwma VerticalScaling elastics-tcnsxn data Succeed 3/3 Sep 11,2025 18:51 UTC+0800
check ops status done
ops_status:elastics-tcnsxn-verticalscaling-jqd9x ns-nzwma VerticalScaling elastics-tcnsxn data Succeed 3/3 Sep 11,2025 18:51 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-tcnsxn-verticalscaling-jqd9x --namespace ns-nzwma `
opsrequest.operations.kubeblocks.io/elastics-tcnsxn-verticalscaling-jqd9x patched
`kbcli cluster delete-ops --name elastics-tcnsxn-verticalscaling-jqd9x --force --auto-approve --namespace ns-nzwma `
OpsRequest elastics-tcnsxn-verticalscaling-jqd9x deleted
No resources found in ns-nzwma namespace.
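A detail worth noting in the tables above: the vscale requested `--memory 2.1Gi`, but `list-instances` reports `2254857830400m`. 2.1Gi is not a whole number of bytes, so the Kubernetes API server canonicalizes the quantity in milli-units (1m = 1/1000 of a byte). A sketch of that conversion, using exact decimal arithmetic (`gi_to_milli` is a hypothetical helper, not the real `resource.Quantity` code, which also preserves binary suffixes like `2Gi` where it can):

```python
from decimal import Decimal

def gi_to_milli(gi):
    """Render an <n>Gi memory quantity the way the API server canonicalizes it."""
    value = Decimal(gi) * (2 ** 30)        # bytes, possibly fractional
    if value == value.to_integral_value():
        return str(int(value))             # whole number of bytes: no suffix
    return f"{int(value * 1000)}m"         # fractional bytes: milli-byte units

canonical = gi_to_milli("2.1")  # -> "2254857830400m", matching the output above
```

So the odd-looking `2254857830400m / 2254857830400m` column is exactly `2.1Gi`, not a bug in the scaling.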
check db_client batch data count `echo "curl -X GET 'elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check db_client batch data Success
cluster update terminationPolicy WipeOut `kbcli cluster update elastics-tcnsxn --termination-policy=WipeOut --namespace ns-nzwma `
cluster.apps.kubeblocks.io/elastics-tcnsxn updated (no change)
check cluster status `kbcli cluster list elastics-tcnsxn --show-labels --namespace ns-nzwma `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-tcnsxn ns-nzwma WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=elastics-tcnsxn
check cluster status done cluster_status:Running
check pod status `kbcli cluster list-instances elastics-tcnsxn --namespace ns-nzwma `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-tcnsxn-data-0 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:55 UTC+0800
elastics-tcnsxn-data-1 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:53 UTC+0800
elastics-tcnsxn-data-2 ns-nzwma elastics-tcnsxn data Running 0 600m / 600m 2254857830400m / 2254857830400m data:22Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:52 UTC+0800
elastics-tcnsxn-kibana-0 ns-nzwma elastics-tcnsxn kibana Running 0 600m / 600m 2254857830400m / 2254857830400m aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:44 UTC+0800
elastics-tcnsxn-master-0 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-1 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:38 UTC+0800
elastics-tcnsxn-master-2 ns-nzwma elastics-tcnsxn master Running 0 600m / 600m 2254857830400m / 2254857830400m data:21Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:38 UTC+0800
check pod status done
No resources found in ns-nzwma namespace.
check cluster connect `echo "curl http://elastics-tcnsxn-master-http.ns-nzwma.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-tcnsxn-master-0 --namespace ns-nzwma -- sh` check cluster connect done
cluster list-logs `kbcli cluster list-logs elastics-tcnsxn --namespace ns-nzwma `
No log files found.
Error from server (NotFound): pods "elastics-tcnsxn-master-0" not found
cluster logs `kbcli cluster logs elastics-tcnsxn --tail 30 --namespace ns-nzwma `
Defaulted container "elasticsearch" out of: elasticsearch, exporter, es-agent, kbagent, prepare-plugins (init), install-plugins (init), install-es-agent (init), init-kbagent (init), kbagent-worker (init)
{"@timestamp":"2025-09-11T10:56:17.395Z", "log.level": "INFO", "message":"loaded module [x-pack-stack]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.plugins.PluginsService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.395Z", "log.level": "INFO", "message":"loaded module [x-pack-text-structure]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.plugins.PluginsService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.396Z", "log.level":
"INFO", "message":"loaded module [x-pack-voting-only-node]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.plugins.PluginsService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.396Z", "log.level": "INFO", "message":"loaded module [x-pack-watcher]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.plugins.PluginsService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.396Z", "log.level": "INFO", "message":"no plugins loaded", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.plugins.PluginsService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.502Z", "log.level": "INFO", "message":"using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sdh)]], net usable_space [21.4gb], net total_space [21.4gb], types [ext4]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.env.NodeEnvironment","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.503Z", "log.level": "INFO", "message":"heap size [1gb], compressed ordinary object pointers [true]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.env.NodeEnvironment","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:17.724Z", "log.level": "INFO", "message":"node name [elastics-tcnsxn-data-0], node ID [BIBOpQkoTwuHa6KhnmAv3w], cluster name [ns-nzwma], roles [data]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"timestamp": "2025-09-11T10:56:22+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2025-09-11T10:56:27+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"timestamp": "2025-09-11T10:56:32+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"@timestamp":"2025-09-11T10:56:33.629Z", "log.level": "INFO", "message":"[controller/515] [Main.cc@123] controller (64 bit): Version 8.1.3 (Build 92d8267e6ebfb7) Copyright (c) 2022 Elasticsearch BV", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"ml-cpp-log-tail-thread","log.logger":"org.elasticsearch.xpack.ml.process.logging.CppLogMessageHandler","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:34.019Z", "log.level": "INFO", "message":"Security is disabled", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.xpack.security.Security","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:36.798Z", "log.level": "INFO", "message":"creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]", "ecs.version":
"1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.transport.netty4.NettyAllocator","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:36.826Z", "log.level": "INFO", "message":"using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.indices.recovery.RecoverySettings","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:36.917Z", "log.level": "INFO", "message":"using discovery type [multi-node] and seed hosts providers [settings]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.discovery.DiscoveryModule","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"timestamp": "2025-09-11T10:56:37+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"@timestamp":"2025-09-11T10:56:40.422Z", "log.level": "INFO", "message":"initialized", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:40.423Z", "log.level": "INFO", "message":"starting ...", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:40.452Z", "log.level": "INFO", "message":"persistent cache index loaded", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.xpack.searchablesnapshots.cache.full.PersistentCache","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:40.453Z", "log.level": "INFO", "message":"deprecation component started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.xpack.deprecation.logging.DeprecationIndexingComponent","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:40.642Z", "log.level": "INFO", "message":"publish_address {10.244.1.236:9300}, bound_addresses {[::]:9300}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.transport.TransportService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"timestamp": "2025-09-11T10:56:42+00:00", "message": "readiness probe failed", "curl_rc": "7"}
{"@timestamp":"2025-09-11T10:56:42.399Z", "log.level": "INFO", "message":"bound or publishing to a non-loopback address, enforcing bootstrap checks", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.BootstrapChecks","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:42.401Z", "log.level": "INFO", "message":"cluster UUID [hVWxAxRBTWSgbNQ3r3Mhmw]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.cluster.coordination.Coordinator","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:43.405Z",
"log.level": "INFO", "message":"master node changed {previous [], current [{elastics-tcnsxn-master-1}{_97T06q3ROioPbbmBvFFvQ}{IAxmBIXNSViPBNFfuGVfvw}{10.244.1.237}{10.244.1.237:9300}{m}]}, added {{elastics-tcnsxn-master-1}{_97T06q3ROioPbbmBvFFvQ}{IAxmBIXNSViPBNFfuGVfvw}{10.244.1.237}{10.244.1.237:9300}{m}, {elastics-tcnsxn-data-2}{6TuvxI0qS8Cfk-z5u-dCLA}{62TuLGJATEGphAZ05R6AdQ}{10.244.3.116}{10.244.3.116:9300}{d}, {elastics-tcnsxn-master-0}{xH77Ae8CSPOcNGr-DjIv2Q}{cx0ZSqGLR76rzKi9rdwR8g}{10.244.5.20}{10.244.5.20:9300}{m}, {elastics-tcnsxn-data-1}{9NStcgHMSbW7Wkr3vnOsmQ}{EMyX-UKIRG-_XfFEJYGR8w}{10.244.5.242}{10.244.5.242:9300}{d}, {elastics-tcnsxn-master-2}{nPtUglEVQ1eYgrpSgeVx9A}{mPcZlZGrTKa1whodkA0eyA}{10.244.3.247}{10.244.3.247:9300}{m}}, term: 24, version: 759, reason: ApplyCommitRequest{term=24, version=759, sourceNode={elastics-tcnsxn-master-1}{_97T06q3ROioPbbmBvFFvQ}{IAxmBIXNSViPBNFfuGVfvw}{10.244.1.237}{10.244.1.237:9300}{m}{k8s_node_name=aks-cicdamdpool-42771698-vmss000004, xpack.installed=true}}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"elasticsearch[elastics-tcnsxn-data-0][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.service.ClusterApplierService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:44.125Z", "log.level": "WARN", "message":"Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"elasticsearch[elastics-tcnsxn-data-0][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.security.ingest.SetSecurityUserProcessor","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:45.802Z", "log.level": "INFO", "message":"license [435d276d-306d-410b-af55-4e3b084ed3ec] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"elasticsearch[elastics-tcnsxn-data-0][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.LicenseService","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:45.809Z", "log.level": "INFO", "message":"publish_address {elastics-tcnsxn-data-0.elastics-tcnsxn-data-headless.ns-nzwma.svc.cluster.local/10.244.1.236:9200}, bound_addresses {[::]:9200}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"hVWxAxRBTWSgbNQ3r3Mhmw","elasticsearch.node.id":"BIBOpQkoTwuHa6KhnmAv3w","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
{"@timestamp":"2025-09-11T10:56:45.810Z", "log.level": "INFO", "message":"started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"ES_ECS","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.cluster.uuid":"hVWxAxRBTWSgbNQ3r3Mhmw","elasticsearch.node.id":"BIBOpQkoTwuHa6KhnmAv3w","elasticsearch.node.name":"elastics-tcnsxn-data-0","elasticsearch.cluster.name":"ns-nzwma"}
delete cluster elastics-tcnsxn `kbcli cluster delete elastics-tcnsxn --auto-approve --namespace ns-nzwma `
Cluster elastics-tcnsxn deleted
pod_info:elastics-tcnsxn-data-0 4/4
Running 0 85s elastics-tcnsxn-data-1 4/4 Running 0 3m6s elastics-tcnsxn-data-2 4/4 Running 0 4m52s elastics-tcnsxn-kibana-0 1/1 Running 0 12m elastics-tcnsxn-master-0 4/4 Running 0 18m elastics-tcnsxn-master-1 4/4 Running 0 18m elastics-tcnsxn-master-2 4/4 Running 0 18m
pod_info:elastics-tcnsxn-data-0 2/4 Terminating 0 106s elastics-tcnsxn-data-1 2/4 Terminating 0 3m27s elastics-tcnsxn-data-2 2/4 Terminating 0 5m13s elastics-tcnsxn-kibana-0 1/1 Terminating 0 13m elastics-tcnsxn-master-0 2/4 Terminating 0 18m elastics-tcnsxn-master-1 2/4 Terminating 0 18m elastics-tcnsxn-master-2 2/4 Terminating 0 18m
No resources found in ns-nzwma namespace.
delete cluster pod done
No resources found in ns-nzwma namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-nzwma namespace.
delete cluster done
No resources found in ns-nzwma namespace.
No resources found in ns-nzwma namespace.
No resources found in ns-nzwma namespace.
ElasticSearch Test Suite All Done!
Test Engine: elasticsearch
Test Type: 25
--------------------------------------ElasticSearch (Topology = multi-node Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[ComponentDefinition=elasticsearch-8-1.0.1;ComponentVersion=elasticsearch;ServiceVersion=8.1.3;]|[Description=Create a cluster with the specified component definition elasticsearch-8-1.0.1 and component version elasticsearch and service version 8.1.3]
[PASSED]|[Connect]|[ComponentName=master]|[Description=Connect to the cluster]
[PASSED]|[AddData]|[Values=odhzs]|[Description=Add data to the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=data]|[Description=HorizontalScaling Out the cluster specify component data]
[PASSED]|[HorizontalScaling In]|[ComponentName=data]|[Description=HorizontalScaling In the cluster specify component data]
[PASSED]|[HorizontalScaling Out]|[ComponentName=master]|[Description=HorizontalScaling Out the cluster specify component master]
[PASSED]|[HorizontalScaling In]|[ComponentName=master]|[Description=HorizontalScaling In the cluster specify component master]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=data]|[Description=VolumeExpansion the cluster specify component data]
[PASSED]|[Restart]|[ComponentName=master]|[Description=Restart the cluster specify component master]
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=master]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[VerticalScaling]|[ComponentName=master]|[Description=VerticalScaling the cluster specify component master]
[PASSED]|[Restart]|[ComponentName=kibana]|[Description=Restart the cluster specify component kibana]
[PASSED]|[Restart]|[ComponentName=data]|[Description=Restart the cluster specify component data]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=master]|[Description=Hscale the cluster instances offline specify component master]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=master]|[Description=Hscale the cluster instances online specify component master]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=kibana]|[Description=VerticalScaling the cluster specify component kibana]
[PASSED]|[VolumeExpansion]|[ComponentName=master]|[Description=VolumeExpansion the cluster specify component master]
[PASSED]|[VerticalScaling]|[ComponentName=data]|[Description=VerticalScaling the cluster specify component data]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]