source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-znapx`
`kubectl create namespace ns-znapx`
namespace/ns-znapx created
create namespace ns-znapx done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.4-beta.1`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 0.9.4
kbcli: 0.9.4-beta.1
WARNING: version difference between kbcli (0.9.4-beta.1) and kubeblocks (0.9.4)
Make sure your docker service is running and begin your journey with kbcli:
	kbcli playground init
For more information on how to get started, please visit:
	https://kubeblocks.io
download kbcli v0.9.4-beta.1 done
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 0.9.4
kbcli: 0.9.4-beta.1
WARNING: version difference between kbcli (0.9.4-beta.1) and kubeblocks (0.9.4)
Kubernetes Env: v1.32.5-eks-5d4a308
POD_RESOURCES: No resources found
found default storage class: gp3
kubeblocks version is: 0.9.4
skip upgrade kubeblocks
Error: no repositories to show
`helm repo add chaos-mesh https://charts.chaos-mesh.org`
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check cluster definition
set component name: elasticsearch
set component version
set component version: elasticsearch
set service versions: 7.10.1,7.7.1,7.8.1,8.1.3,8.8.2,8.9.1
set service versions sorted: 7.7.1,7.8.1,7.10.1,8.1.3,8.8.2,8.9.1
no cluster version found
set elasticsearch component definition
set elasticsearch component definition elasticsearch-8
set replicas first: 3,7.7.1|3,7.8.1|3,7.10.1|3,8.1.3|3,8.8.2|3,8.9.1
set replicas third: 3,8.8.2
set replicas fourth: 3,8.1.3
set minimum cmpv service version
set minimum cmpv service version replicas: 3,8.1.3
REPORT_COUNT:1
CLUSTER_TOPOLOGY:multi-node
topology multi-node found in cluster definition elasticsearch
LIMIT_CPU:0.5
LIMIT_MEMORY:2
storage size: 20
No resources found in ns-znapx namespace.
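The "set service versions sorted" step above orders versions semantically, so 7.10.1 lands after 7.8.1 rather than where a lexical sort would put it. A minimal sketch of that sort in shell, assuming GNU coreutils' sort -V (the variable name is illustrative, not from the test harness):

# Sort comma-separated service versions semantically (7.10.1 must follow 7.8.1).
versions="7.10.1,7.7.1,7.8.1,8.1.3,8.8.2,8.9.1"
echo "$versions" | tr ',' '\n' | sort -V | paste -sd, -
# prints: 7.7.1,7.8.1,7.10.1,8.1.3,8.8.2,8.9.1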
termination_policy:DoNotTerminate
create 3 replica DoNotTerminate elasticsearch cluster
check cluster definition
check component definition
set component definition by component version
check cmpd by labels
set component definition1: elasticsearch-8 by component version:elasticsearch
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: elastics-vpfimh
  namespace: ns-znapx
  annotations:
    kubeblocks.io/extra-env: '{"mdit-roles":"master,data,ingest,transform","mode":"multi-node"}'
spec:
  terminationPolicy: DoNotTerminate
  componentSpecs:
  - name: mdit
    componentDef: elasticsearch-8
    serviceAccountName: kb-elastics-vpfimh
    monitor: true
    disableExporter: false
    podUpdatePolicy: Recreate
    replicas: 3
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        cpu: 500m
        memory: 2Gi
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
    services:
    tls: false
  - name: kibana
    componentDef: kibana-8
    serviceAccountName: kb-elastics-vpfimh
    replicas: 1
    podUpdatePolicy: Recreate
    disableExporter: false
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        cpu: 500m
        memory: 2Gi
    services:
    tls: false
`kubectl apply -f test_create_elastics-vpfimh.yaml`
cluster.apps.kubeblocks.io/elastics-vpfimh created
apply test_create_elastics-vpfimh.yaml Success
`rm -rf test_create_elastics-vpfimh.yaml`
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate                Jun 19,2025 18:16 UTC+0800
cluster_status:
cluster_status:Creating
...
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                           ip-172-31-11-25.us-west-2.compute.internal/172.31.11.25     Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-7-160.us-west-2.compute.internal/172.31.7.160     Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:16 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
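The status check above polls `kbcli cluster list` until the cluster settles. A wait loop with the same effect against the Cluster resource's status.phase might look like this (a sketch; assumes the Creating/Updating/Running phase values reported by KubeBlocks 0.9, and a hypothetical ten-minute budget):

# Poll the Cluster phase until it reaches Running, checking every 10s for up to 10 min.
for i in $(seq 1 60); do
  phase=$(kubectl get cluster elastics-vpfimh -n ns-znapx -o jsonpath='{.status.phase}')
  echo "cluster_status:${phase}"
  [ "$phase" = "Running" ] && break
  sleep 10
done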
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=elastics-vpfimh`
set secret: elastics-vpfimh-mdit-account-elastic
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic;DB_PASSWORD:A11PH7S3j6;DB_PORT:9200;DB_DATABASE:elastic
check pod elastics-vpfimh-mdit-0 container_name elasticsearch exist password A11PH7S3j6
check pod elastics-vpfimh-mdit-0 container_name exporter exist password A11PH7S3j6
check pod elastics-vpfimh-mdit-0 container_name lorry exist password A11PH7S3j6
No container logs contain secret password.
describe cluster
`kbcli cluster describe elastics-vpfimh --namespace ns-znapx`
Name: elastics-vpfimh	 Created Time: Jun 19,2025 18:16 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION   STATUS    TERMINATION-POLICY
ns-znapx                                   Running   DoNotTerminate

Endpoints:
COMPONENT   MODE        INTERNAL                                                       EXTERNAL
mdit        ReadWrite   elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200
kibana      ReadWrite   elastics-vpfimh-kibana-http.ns-znapx.svc.cluster.local:5601

Topology:
COMPONENT   INSTANCE                   ROLE   STATUS    AZ           NODE                                                        CREATED-TIME
kibana      elastics-vpfimh-kibana-0          Running   us-west-2a   ip-172-31-11-25.us-west-2.compute.internal/172.31.11.25     Jun 19,2025 18:16 UTC+0800
mdit        elastics-vpfimh-mdit-0            Running   us-west-2a   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:16 UTC+0800
mdit        elastics-vpfimh-mdit-1            Running   us-west-2a   ip-172-31-7-160.us-west-2.compute.internal/172.31.7.160     Jun 19,2025 18:16 UTC+0800
mdit        elastics-vpfimh-mdit-2            Running   us-west-2a   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:16 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
mdit        false       500m / 500m          2Gi / 2Gi               data:20Gi      kb-default-sc
kibana      false       500m / 500m          2Gi / 2Gi

Images:
COMPONENT   TYPE   IMAGE
mdit               docker.io/apecloud/elasticsearch:8.9.1
kibana             docker.io/apecloud/kibana:8.9.1

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-znapx elastics-vpfimh

`kbcli cluster label elastics-vpfimh app.kubernetes.io/instance- --namespace ns-znapx`
label "app.kubernetes.io/instance" not found.
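The DB_USERNAME/DB_PASSWORD values above were read from the connection Secret with jsonpath; the .data fields are base64-encoded, so decoding them by hand might look like this (same Secret as above, with the namespace made explicit):

# Secret .data values are base64-encoded; decode the credential fields directly.
kubectl get secret elastics-vpfimh-mdit-account-elastic -n ns-znapx \
  -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret elastics-vpfimh-mdit-account-elastic -n ns-znapx \
  -o jsonpath='{.data.password}' | base64 -d; echo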
`kbcli cluster label elastics-vpfimh app.kubernetes.io/instance=elastics-vpfimh --namespace ns-znapx`
`kbcli cluster label elastics-vpfimh --list --namespace ns-znapx`
NAME              NAMESPACE   LABELS
elastics-vpfimh   ns-znapx    app.kubernetes.io/instance=elastics-vpfimh
label cluster app.kubernetes.io/instance=elastics-vpfimh Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=elastics-vpfimh --namespace ns-znapx`
`kbcli cluster label elastics-vpfimh --list --namespace ns-znapx`
NAME              NAMESPACE   LABELS
elastics-vpfimh   ns-znapx    app.kubernetes.io/instance=elastics-vpfimh case.name=kbcli.test1
label cluster case.name=kbcli.test1 Success
`kbcli cluster label elastics-vpfimh case.name=kbcli.test2 --overwrite --namespace ns-znapx`
`kbcli cluster label elastics-vpfimh --list --namespace ns-znapx`
NAME              NAMESPACE   LABELS
elastics-vpfimh   ns-znapx    app.kubernetes.io/instance=elastics-vpfimh case.name=kbcli.test2
label cluster case.name=kbcli.test2 Success
`kbcli cluster label elastics-vpfimh case.name- --namespace ns-znapx`
`kbcli cluster label elastics-vpfimh --list --namespace ns-znapx`
NAME              NAMESPACE   LABELS
elastics-vpfimh   ns-znapx    app.kubernetes.io/instance=elastics-vpfimh
delete cluster label case.name Success
cluster connect
No resources found in ns-znapx namespace.
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
Defaulted container "elasticsearch" out of: elasticsearch, exporter, lorry, prepare-plugins (init), install-plugins (init), init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
{
  "cluster_name" : "elastics-vpfimh",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 17,
  "active_shards" : 35,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-elastics-vpfimh" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-vpfimh --namespace ns-znapx`
Error from server (NotFound): pods "test-db-client-executionloop-elastics-vpfimh" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-elastics-vpfimh" not found
`kubectl get secrets -l app.kubernetes.io/instance=elastics-vpfimh`
set secret: elastics-vpfimh-mdit-account-elastic
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic;DB_PASSWORD:A11PH7S3j6;DB_PORT:9200;DB_DATABASE:elastic
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-elastics-vpfimh
  namespace: ns-znapx
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local"
    - "--user"
    - "elastic"
    - "--password"
    - "A11PH7S3j6"
    - "--port"
    - "9200"
    - "--dbtype"
    - "elasticsearch"
    - "--test"
    - "executionloop"
    - "--duration"
    - "60"
    - "--interval"
    - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-elastics-vpfimh.yaml`
pod/test-db-client-executionloop-elastics-vpfimh created
apply test-db-client-executionloop-elastics-vpfimh.yaml Success
`rm -rf test-db-client-executionloop-elastics-vpfimh.yaml`
check pod status
pod_status:
NAME                                           READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-elastics-vpfimh   1/1     Running   0          6s
...
test-db-client-executionloop-elastics-vpfimh   1/1     Running   0          60s
check pod test-db-client-executionloop-elastics-vpfimh status done
pod_status:
test-db-client-executionloop-elastics-vpfimh   0/1     Completed   0        66s
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Running   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                           ip-172-31-11-25.us-west-2.compute.internal/172.31.11.25     Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-7-160.us-west-2.compute.internal/172.31.7.160     Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:16 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
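The pod_status loop above polls every few seconds until the one-shot client pod completes. kubectl wait can express the same condition declaratively (a sketch; jsonpath-based waits require kubectl 1.23 or newer, and the 120s timeout is an arbitrary choice):

# Block until the db-client pod's phase is Succeeded, or give up after 2 minutes.
kubectl wait pod/test-db-client-executionloop-elastics-vpfimh \
  --for=jsonpath='{.status.phase}'=Succeeded \
  --timeout=120s -n ns-znapx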
db client output (excerpt):
10:24:57.807 [I/O dispatcher 1] DEBUG org.apache.http.headers -- http-outgoing-0 >> POST /executions_loop_index/_doc HTTP/1.1
10:24:57.808 [I/O dispatcher 1] DEBUG org.apache.http.headers -- http-outgoing-0 >> Host: elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200
10:24:57.808 [I/O dispatcher 1] DEBUG org.apache.http.headers -- http-outgoing-0 >> Authorization: Basic ZWxhc3RpYzpBMTFQSDdTM2o2
10:24:57.809 [I/O dispatcher 1] DEBUG org.apache.http.wire -- http-outgoing-0 >> "{"id":"1750328697805","name":"executions_loop_3135","value":"3135"}"
10:24:57.896 [I/O dispatcher 1] DEBUG org.apache.http.wire -- http-outgoing-0 << "HTTP/1.1 201 Created[\r][\n]"
10:24:57.896 [I/O dispatcher 1] DEBUG org.apache.http.wire -- http-outgoing-0 << "{"_index":"executions_loop_index","_id":"UZy4h5cBU0NdKRk1Y-nS","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":3134,"_primary_term":1}"
...
10:24:57.897 [main] DEBUG org.elasticsearch.client.RestClient -- request [POST http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_doc] returned [HTTP/1.1 201 Created]
[ 60s ] executions total: 3135 successful: 3135 failed: 0 disconnect: 0
10:24:57.898 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager is shutting down
10:24:57.900 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager shut down
Test Result:
Total Executions: 3135
Successful Executions: 3135
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: elasticsearch
Host: elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local
Port: 9200
Database:
Table:
User: elastic
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 3135
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-vpfimh --namespace ns-znapx`
pod/test-db-client-executionloop-elastics-vpfimh patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-elastics-vpfimh" force deleted
No resources found in ns-znapx namespace.
`echo "curl -X POST 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/boss/_doc/1?pretty' -H 'Content-Type: application/json' -d '{\"datainsert\":\"hohrk\"}'" | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
Defaulted container "elasticsearch" out of: elasticsearch, exporter, lorry, prepare-plugins (init), install-plugins (init), init-lorry (init)
Unable to use a TTY - input is not a terminal or the right kind of file
{
  "_index" : "boss",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
add consistent data hohrk Success
cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale elastics-vpfimh --auto-approve --force=true --components mdit --cpu 600m --memory 2.1Gi --namespace ns-znapx`
OpsRequest elastics-vpfimh-verticalscaling-8fpjl created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-verticalscaling-8fpjl -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                    NAMESPACE   TYPE              CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-verticalscaling-8fpjl   ns-znapx    VerticalScaling   elastics-vpfimh   mdit        Running   0/3        Jun 19,2025 18:25 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-11-25.us-west-2.compute.internal/172.31.11.25     Jun 19,2025 18:16 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:32 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129     Jun 19,2025 18:28 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:26 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                    NAMESPACE   TYPE              CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-verticalscaling-8fpjl   ns-znapx    VerticalScaling   elastics-vpfimh   mdit        Succeed   3/3        Jun 19,2025 18:25 UTC+0800
check ops status done
ops_status:elastics-vpfimh-verticalscaling-8fpjl ns-znapx VerticalScaling elastics-vpfimh mdit Succeed 3/3 Jun 19,2025 18:25 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-verticalscaling-8fpjl --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-verticalscaling-8fpjl patched
`kbcli cluster delete-ops --name elastics-vpfimh-verticalscaling-8fpjl --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-verticalscaling-8fpjl deleted
No resources found in ns-znapx namespace.
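Note the MEMORY(REQUEST/LIMIT) column above: 2254857830400m is simply 2.1Gi rendered in Kubernetes' milli-units (2.1 x 1024^3 bytes x 1000). `kbcli cluster vscale` drives this change through an OpsRequest; a hand-written equivalent might look like the following sketch, modeled on the HorizontalScaling manifests later in this run (the exact verticalScaling field layout is an assumption against apps.kubeblocks.io/v1alpha1):

apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-vpfimh-verticalscaling-
  namespace: ns-znapx
spec:
  type: VerticalScaling
  clusterName: elastics-vpfimh
  force: true
  verticalScaling:
  - componentName: mdit
    # Requests and limits are set together, matching the kbcli flags above.
    requests:
      cpu: 600m
      memory: 2.1Gi
    limits:
      cpu: 600m
      memory: 2.1Gi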
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check db_client batch data Success
check component kibana exists
`kubectl get components -l app.kubernetes.io/instance=elastics-vpfimh,apps.kubeblocks.io/component-name=kibana --namespace ns-znapx | (grep "kibana" || true )`
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart elastics-vpfimh --auto-approve --force=true --components kibana --namespace ns-znapx`
OpsRequest elastics-vpfimh-restart-246xx created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-restart-246xx -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                            NAMESPACE   TYPE      CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-restart-246xx   ns-znapx    Restart   elastics-vpfimh   kibana      Running   0/1        Jun 19,2025 18:34 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-6-232.us-west-2.compute.internal/172.31.6.232     Jun 19,2025 18:34 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:32 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129     Jun 19,2025 18:28 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:26 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
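The recurring "check db_client batch data count" step issues a _search with size 0 and track_total_hits, then compares hits.total.value against DB_CLIENT_BATCH_DATA_COUNT. Extracting that number with jq might look like this (a sketch; jq is not in the elasticsearch image, and the command must run somewhere the service DNS resolves, e.g. via kubectl exec as the harness does):

# Ask ES for an exact document count of the executionloop index; expect 3135 here.
curl -s 'http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' \
  -H 'Content-Type: application/json' \
  -d '{"size":0,"track_total_hits":true}' | jq '.hits.total.value'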
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                            NAMESPACE   TYPE      CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-restart-246xx   ns-znapx    Restart   elastics-vpfimh   kibana      Succeed   1/1        Jun 19,2025 18:34 UTC+0800
check ops status done
ops_status:elastics-vpfimh-restart-246xx ns-znapx Restart elastics-vpfimh kibana Succeed 1/1 Jun 19,2025 18:34 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-restart-246xx --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-restart-246xx patched
`kbcli cluster delete-ops --name elastics-vpfimh-restart-246xx --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-restart-246xx deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check db_client batch data Success
cluster hscale offline instances
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-vpfimh-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: elastics-vpfimh
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-znapx
spec:
  type: HorizontalScaling
  clusterName: elastics-vpfimh
  force: true
  horizontalScaling:
  - componentName: mdit
    scaleIn:
      onlineInstancesToOffline:
      - elastics-vpfimh-mdit-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-vpfimh.yaml`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-hscaleoffinstance-jkrm7 created
create test_ops_cluster_elastics-vpfimh.yaml Success
`rm -rf test_ops_cluster_elastics-vpfimh.yaml`
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-hscaleoffinstance-jkrm7   ns-znapx    HorizontalScaling   elastics-vpfimh   mdit        Running   0/1        Jun 19,2025 18:37 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-6-232.us-west-2.compute.internal/172.31.6.232     Jun 19,2025 18:34 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129     Jun 19,2025 18:28 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:26 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-1 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-hscaleoffinstance-jkrm7   ns-znapx    HorizontalScaling   elastics-vpfimh   mdit        Succeed   1/1        Jun 19,2025 18:37 UTC+0800
check ops status done
ops_status:elastics-vpfimh-hscaleoffinstance-jkrm7 ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:37 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-hscaleoffinstance-jkrm7 --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-hscaleoffinstance-jkrm7 patched
`kbcli cluster delete-ops --name elastics-vpfimh-hscaleoffinstance-jkrm7 --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-hscaleoffinstance-jkrm7 deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-1 --namespace ns-znapx -- sh`
check db_client batch data Success
cluster hscale online instances
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-vpfimh-hscaleoninstance-
  labels:
    app.kubernetes.io/instance: elastics-vpfimh
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-znapx
spec:
  type: HorizontalScaling
  clusterName: elastics-vpfimh
  force: true
  horizontalScaling:
  - componentName: mdit
    scaleOut:
      offlineInstancesToOnline:
      - elastics-vpfimh-mdit-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-vpfimh.yaml`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-hscaleoninstance-wr8jh created
create test_ops_cluster_elastics-vpfimh.yaml Success
`rm -rf test_ops_cluster_elastics-vpfimh.yaml`
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                     NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-hscaleoninstance-wr8jh   ns-znapx    HorizontalScaling   elastics-vpfimh   mdit        Running   0/1        Jun 19,2025 18:38 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-6-232.us-west-2.compute.internal/172.31.6.232     Jun 19,2025 18:34 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:38 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248   Jun 19,2025 18:38 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:26 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                     NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-hscaleoninstance-wr8jh   ns-znapx    HorizontalScaling   elastics-vpfimh   mdit        Succeed   1/1        Jun 19,2025 18:38 UTC+0800
check ops status done
ops_status:elastics-vpfimh-hscaleoninstance-wr8jh ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:38 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-hscaleoninstance-wr8jh --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-hscaleoninstance-wr8jh patched
`kbcli cluster delete-ops --name elastics-vpfimh-hscaleoninstance-wr8jh --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-hscaleoninstance-wr8jh deleted
No resources found in ns-znapx namespace.
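After the scaleIn/scaleOut pair, elastics-vpfimh-mdit-0 is back online. The set of instances currently held offline is recorded on the component spec; a quick check might look like this (a sketch; assumes the offlineInstances field that KubeBlocks 0.9 uses for instance-level scale-in):

# Print instances taken offline for the mdit component (empty once mdit-0 is back).
kubectl get cluster elastics-vpfimh -n ns-znapx \
  -o jsonpath='{.spec.componentSpecs[?(@.name=="mdit")].offlineInstances}'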
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check db_client batch data Success
cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster stop elastics-vpfimh --auto-approve --force=true --namespace ns-znapx`
OpsRequest elastics-vpfimh-stop-6rvkz created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-stop-6rvkz -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                         NAMESPACE   TYPE   CLUSTER           COMPONENT     STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-stop-6rvkz   ns-znapx    Stop   elastics-vpfimh   kibana,mdit   Running   0/4        Jun 19,2025 18:42 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Stopping   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Stopping
...
check cluster status done
cluster_status:Stopped
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                         NAMESPACE   TYPE   CLUSTER           COMPONENT     STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-stop-6rvkz   ns-znapx    Stop   elastics-vpfimh   kibana,mdit   Succeed   4/4        Jun 19,2025 18:42 UTC+0800
check ops status done
ops_status:elastics-vpfimh-stop-6rvkz ns-znapx Stop elastics-vpfimh kibana,mdit Succeed 4/4 Jun 19,2025 18:42 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-stop-6rvkz --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-stop-6rvkz patched
`kbcli cluster delete-ops --name elastics-vpfimh-stop-6rvkz --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-stop-6rvkz deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start elastics-vpfimh --force=true --namespace ns-znapx`
OpsRequest elastics-vpfimh-start-znk4v created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-start-znk4v -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                          NAMESPACE   TYPE    CLUSTER           COMPONENT     STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-start-znk4v   ns-znapx    Start   elastics-vpfimh   kibana,mdit   Running   0/4        Jun 19,2025 18:44 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129     Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248   Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:44 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                          NAMESPACE   TYPE    CLUSTER           COMPONENT     STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-start-znk4v   ns-znapx    Start   elastics-vpfimh   kibana,mdit   Succeed   4/4        Jun 19,2025 18:44 UTC+0800
check ops status done
ops_status:elastics-vpfimh-start-znk4v ns-znapx Start elastics-vpfimh kibana,mdit Succeed 4/4 Jun 19,2025 18:44 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-start-znk4v --namespace ns-znapx`
opsrequest.apps.kubeblocks.io/elastics-vpfimh-start-znk4v patched
`kbcli cluster delete-ops --name elastics-vpfimh-start-znk4v --force --auto-approve --namespace ns-znapx`
OpsRequest elastics-vpfimh-start-znk4v deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check db_client batch data Success
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-vpfimh namespace.
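The stop/start pair above went through kbcli, which again generates OpsRequests. Stop needs no type-specific payload, so a hand-written version might be as small as this sketch (Start is identical apart from type: Start; the field layout follows the HorizontalScaling manifests above):

apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-vpfimh-stop-
  namespace: ns-znapx
spec:
  type: Stop
  clusterName: elastics-vpfimh
  force: true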
`kbcli cluster hscale elastics-vpfimh --auto-approve --force=true --components mdit --replicas 4 --namespace ns-znapx`
OpsRequest elastics-vpfimh-horizontalscaling-hpkvl created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-horizontalscaling-hpkvl -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx`
NAME                                      NAMESPACE   TYPE                CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
elastics-vpfimh-horizontalscaling-hpkvl   ns-znapx    HorizontalScaling   elastics-vpfimh   mdit        Running   0/1        Jun 19,2025 18:48 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx`
NAME              NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-vpfimh   ns-znapx                                   DoNotTerminate       Updating   Jun 19,2025 18:16 UTC+0800   app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE     NODE                                                        CREATED-TIME
elastics-vpfimh-kibana-0   ns-znapx    elastics-vpfimh   kibana      Running                       us-west-2a   500m / 500m          2Gi / 2Gi                                     ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-0     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129     Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-1     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248   Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-2     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15       Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-3     ns-znapx    elastics-vpfimh   mdit        Running                       us-west-2a   600m / 600m          2254857830400m / 2254857830400m   data:20Gi   ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205   Jun 19,2025 18:48 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
No resources found in elastics-vpfimh namespace.
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-horizontalscaling-hpkvl ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:48 UTC+0800
check ops status done
ops_status:elastics-vpfimh-horizontalscaling-hpkvl ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:48 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-horizontalscaling-hpkvl --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-horizontalscaling-hpkvl patched
`kbcli cluster delete-ops --name elastics-vpfimh-horizontalscaling-hpkvl --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-horizontalscaling-hpkvl deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
cluster hscale
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-vpfimh namespace.
`kbcli cluster hscale elastics-vpfimh --auto-approve --force=true --components mdit --replicas 3 --namespace ns-znapx `
OpsRequest elastics-vpfimh-horizontalscaling-f9s6j created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-horizontalscaling-f9s6j -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-horizontalscaling-f9s6j ns-znapx HorizontalScaling elastics-vpfimh mdit Running 0/1 Jun 19,2025 18:50 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Updating Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 500m / 500m 2Gi / 2Gi ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 18:44 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
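Note: the batch-data check issues a zero-size search with track_total_hits so Elasticsearch returns an exact document count without fetching any hits. Run against the mdit HTTP service directly (index name taken from the log above):

curl -s -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' \
  -H 'Content-Type: application/json' \
  -d '{"size": 0, "track_total_hits": true}'
# the exact count comes back under hits.total.value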
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
No resources found in elastics-vpfimh namespace.
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-horizontalscaling-f9s6j ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:50 UTC+0800
check ops status done
ops_status:elastics-vpfimh-horizontalscaling-f9s6j ns-znapx HorizontalScaling elastics-vpfimh mdit Succeed 1/1 Jun 19,2025 18:50 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-horizontalscaling-f9s6j --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-horizontalscaling-f9s6j patched
`kbcli cluster delete-ops --name elastics-vpfimh-horizontalscaling-f9s6j --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-horizontalscaling-f9s6j deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-vpfimh --auto-approve --force=true --components mdit --namespace ns-znapx `
OpsRequest elastics-vpfimh-restart-76rvd created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-restart-76rvd -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-restart-76rvd ns-znapx Restart elastics-vpfimh mdit Running 0/3 Jun 19,2025 18:51 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Updating Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
...
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 500m / 500m 2Gi / 2Gi ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 18:44 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 18:57 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 18:54 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 18:52 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-restart-76rvd ns-znapx Restart elastics-vpfimh mdit Succeed 3/3 Jun 19,2025 18:51 UTC+0800
check ops status done
ops_status:elastics-vpfimh-restart-76rvd ns-znapx Restart elastics-vpfimh mdit Succeed 3/3 Jun 19,2025 18:51 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-restart-76rvd --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-restart-76rvd patched
`kbcli cluster delete-ops --name elastics-vpfimh-restart-76rvd --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-restart-76rvd deleted
No resources found in ns-znapx namespace.
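Note: the component restart is likewise an OpsRequest; a sketch of the manifest behind `kbcli cluster restart --components mdit`, under the same v1alpha1 assumptions as above (name hypothetical):

kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-vpfimh-restart-mdit  # hypothetical name
  namespace: ns-znapx
spec:
  clusterName: elastics-vpfimh
  type: Restart
  restart:
  - componentName: mdit               # only mdit pods are recreated
EOF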
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
check component kibana exists
`kubectl get components -l app.kubernetes.io/instance=elastics-vpfimh,apps.kubeblocks.io/component-name=kibana --namespace ns-znapx | (grep "kibana" || true )`
cluster vscale
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster vscale elastics-vpfimh --auto-approve --force=true --components kibana --cpu 600m --memory 2.1Gi --namespace ns-znapx `
OpsRequest elastics-vpfimh-verticalscaling-gx6ss created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-verticalscaling-gx6ss -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-verticalscaling-gx6ss ns-znapx VerticalScaling elastics-vpfimh kibana Running 0/1 Jun 19,2025 19:00 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Updating Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
...
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 19:00 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 18:57 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 18:54 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 18:52 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
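Note: the odd memory figures in the instance list are an artifact of requesting a fractional quantity: 2.1Gi is 2254857830.4 bytes, and Kubernetes normalizes a fractional byte count to millibytes, hence 2254857830400m. The vertical scaling itself is again an OpsRequest; a sketch under the same v1alpha1 assumptions (name hypothetical):

kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-vpfimh-vscale-kibana  # hypothetical name
  namespace: ns-znapx
spec:
  clusterName: elastics-vpfimh
  type: VerticalScaling
  verticalScaling:
  - componentName: kibana
    requests:
      cpu: 600m
      memory: 2.1Gi   # normalized by Kubernetes to 2254857830400m
    limits:
      cpu: 600m
      memory: 2.1Gi
EOF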
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-verticalscaling-gx6ss ns-znapx VerticalScaling elastics-vpfimh kibana Succeed 1/1 Jun 19,2025 19:00 UTC+0800
check ops status done
ops_status:elastics-vpfimh-verticalscaling-gx6ss ns-znapx VerticalScaling elastics-vpfimh kibana Succeed 1/1 Jun 19,2025 19:00 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-verticalscaling-gx6ss --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-verticalscaling-gx6ss patched
`kbcli cluster delete-ops --name elastics-vpfimh-verticalscaling-gx6ss --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-verticalscaling-gx6ss deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done cluster_status:Running
`kbcli cluster restart elastics-vpfimh --auto-approve --force=true --namespace ns-znapx `
OpsRequest elastics-vpfimh-restart-lcgxc created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-restart-lcgxc -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-restart-lcgxc ns-znapx Restart elastics-vpfimh mdit,kibana Creating -/- Jun 19,2025 19:02 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Updating Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
...
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 19:03 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 19:08 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 19:05 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:20Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 19:03 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-restart-lcgxc ns-znapx Restart elastics-vpfimh mdit,kibana Succeed 4/4 Jun 19,2025 19:02 UTC+0800
check ops status done
ops_status:elastics-vpfimh-restart-lcgxc ns-znapx Restart elastics-vpfimh mdit,kibana Succeed 4/4 Jun 19,2025 19:02 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-restart-lcgxc --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-restart-lcgxc patched
`kbcli cluster delete-ops --name elastics-vpfimh-restart-lcgxc --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-restart-lcgxc deleted
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
`kubectl get pvc -l app.kubernetes.io/instance=elastics-vpfimh,apps.kubeblocks.io/component-name=mdit,apps.kubeblocks.io/vct-name=data --namespace ns-znapx `
cluster volume-expand
check cluster status before ops
check cluster status done cluster_status:Running
No resources found in elastics-vpfimh namespace.
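Note: every "check cluster connect" step above pipes the same health probe into one of the pods, since the probe must run somewhere that can resolve the in-cluster service. From any such pod the check reduces to:

curl -s 'http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health?pretty'
# healthy output has "status" : "green" and "number_of_nodes" matching the mdit replica count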
`kbcli cluster volume-expand elastics-vpfimh --auto-approve --force=true --components mdit --volume-claim-templates data --storage 25Gi --namespace ns-znapx `
OpsRequest elastics-vpfimh-volumeexpansion-27fqq created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-vpfimh-volumeexpansion-27fqq -n ns-znapx
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-volumeexpansion-27fqq ns-znapx VolumeExpansion elastics-vpfimh mdit Running 0/3 Jun 19,2025 19:11 UTC+0800
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Updating Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
...
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 19:03 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 19:08 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 19:05 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 19:03 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
No resources found in elastics-vpfimh namespace.
check ops status
`kbcli cluster list-ops elastics-vpfimh --status all --namespace ns-znapx `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-vpfimh-volumeexpansion-27fqq ns-znapx VolumeExpansion elastics-vpfimh mdit Succeed 3/3 Jun 19,2025 19:11 UTC+0800
check ops status done
ops_status:elastics-vpfimh-volumeexpansion-27fqq ns-znapx VolumeExpansion elastics-vpfimh mdit Succeed 3/3 Jun 19,2025 19:11 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests elastics-vpfimh-volumeexpansion-27fqq --namespace ns-znapx `
opsrequest.apps.kubeblocks.io/elastics-vpfimh-volumeexpansion-27fqq patched
`kbcli cluster delete-ops --name elastics-vpfimh-volumeexpansion-27fqq --force --auto-approve --namespace ns-znapx `
OpsRequest elastics-vpfimh-volumeexpansion-27fqq deleted
No resources found in ns-znapx namespace.
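Note: volume expansion is also driven by an OpsRequest, and it only succeeds when the backing StorageClass has allowVolumeExpansion enabled. A sketch under the same v1alpha1 assumptions (name hypothetical):

kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-vpfimh-volexp-data   # hypothetical name
  namespace: ns-znapx
spec:
  clusterName: elastics-vpfimh
  type: VolumeExpansion
  volumeExpansion:
  - componentName: mdit
    volumeClaimTemplates:
    - name: data
      storage: 25Gi                   # must be >= the current PVC size
EOF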
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
test failover connectionstress
check node drain
check node drain success
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-vpfimh" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-vpfimh --namespace ns-znapx `
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-vpfimh" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-vpfimh" not found
`kubectl get secrets -l app.kubernetes.io/instance=elastics-vpfimh`
set secret: elastics-vpfimh-mdit-account-elastic
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.username}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.password}"`
`kubectl get secrets elastics-vpfimh-mdit-account-elastic -o jsonpath="{.data.port}"`
DB_USERNAME:elastic;DB_PASSWORD:A11PH7S3j6;DB_PORT:9200;DB_DATABASE:elastic
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-elastics-vpfimh
  namespace: ns-znapx
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local"
    - "--user"
    - "elastic"
    - "--password"
    - "A11PH7S3j6"
    - "--port"
    - "9200"
    - "--database"
    - "elastic"
    - "--dbtype"
    - "elasticsearch"
    - "--test"
    - "connectionstress"
    - "--connections"
    - "1024"
    - "--duration"
    - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-elastics-vpfimh.yaml`
pod/test-db-client-connectionstress-elastics-vpfimh created
apply test-db-client-connectionstress-elastics-vpfimh.yaml Success
`rm -rf test-db-client-connectionstress-elastics-vpfimh.yaml`
check pod status
pod_status:
NAME READY STATUS RESTARTS AGE
test-db-client-connectionstress-elastics-vpfimh 1/1 Running 0 6s
pod_status:
NAME READY STATUS RESTARTS AGE
test-db-client-connectionstress-elastics-vpfimh 1/1 Running 0 11s
check pod test-db-client-connectionstress-elastics-vpfimh status done
pod_status:
NAME READY STATUS RESTARTS AGE
test-db-client-connectionstress-elastics-vpfimh 0/1 Completed 0 18s
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx DoNotTerminate Running Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 19:03 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 19:08 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 19:05 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 19:03 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
[dbclient DEBUG output trimmed: the elasticsearch-java/8.8.2 client opened 1024 connections against http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200; its final GET /_cluster/health?pretty returned HTTP/1.1 200 OK with the body below]
{
  "cluster_name" : "elastics-vpfimh",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 19,
  "active_shards" : 39,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Test Result: Created 1024 connections
Connection Information:
  Database Type: elasticsearch
  Host: elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local
  Port: 9200
  Database: elastic
  Table:
  User: elastic
  Org:
  Access Mode: mysql
  Test Type: connectionstress
  Connection Count: 1024
  Duration: 60 seconds
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-vpfimh --namespace ns-znapx `
pod/test-db-client-connectionstress-elastics-vpfimh patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-connectionstress-elastics-vpfimh" force deleted
check failover pod name
failover pod name:elastics-vpfimh-mdit-0
failover connectionstress Success
No resources found in ns-znapx namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh `
check db_client batch data Success
cluster update terminationPolicy WipeOut
`kbcli cluster update elastics-vpfimh --termination-policy=WipeOut --namespace ns-znapx `
cluster.apps.kubeblocks.io/elastics-vpfimh updated
check cluster status
`kbcli cluster list elastics-vpfimh --show-labels --namespace ns-znapx `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-vpfimh ns-znapx WipeOut Running Jun 19,2025 18:16 UTC+0800 app.kubernetes.io/instance=elastics-vpfimh
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-vpfimh --namespace ns-znapx `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-vpfimh-kibana-0 ns-znapx elastics-vpfimh kibana Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m ip-172-31-14-205.us-west-2.compute.internal/172.31.14.205 Jun 19,2025 19:03 UTC+0800
elastics-vpfimh-mdit-0 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-5-129.us-west-2.compute.internal/172.31.5.129 Jun 19,2025 19:08 UTC+0800
elastics-vpfimh-mdit-1 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-10-248.us-west-2.compute.internal/172.31.10.248 Jun 19,2025 19:05 UTC+0800
elastics-vpfimh-mdit-2 ns-znapx elastics-vpfimh mdit Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:25Gi ip-172-31-2-15.us-west-2.compute.internal/172.31.2.15 Jun 19,2025 19:03 UTC+0800
check pod status done
No resources found in ns-znapx namespace.
check cluster connect
`echo "curl http://elastics-vpfimh-mdit-http.ns-znapx.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-vpfimh-mdit-0 --namespace ns-znapx -- sh`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs elastics-vpfimh --namespace ns-znapx `
No log files found. You can enable the log feature with the kbcli command below.
kbcli cluster update elastics-vpfimh --enable-all-logs=true --namespace ns-znapx
Error from server (NotFound): pods "elastics-vpfimh-mdit-0" not found
cluster logs
`kbcli cluster logs elastics-vpfimh --tail 30 --namespace ns-znapx `
[2025-06-19T11:04:08.707+00:00][INFO ][plugins.alerting] Installing component template .alerts-ecs-mappings
[2025-06-19T11:04:08.718+00:00][INFO ][plugins.ruleRegistry] Installing component template .alerts-technical-mappings
[2025-06-19T11:04:15.237+00:00][INFO ][http.server.Kibana] http server running at http://172.31.11.86:5601
[2025-06-19T11:04:15.326+00:00][INFO ][plugins.fleet] Task Fleet-Usage-Logger-Task scheduled with interval 15m
[2025-06-19T11:04:15.393+00:00][INFO ][status] Kibana is now degraded
[2025-06-19T11:04:15.417+00:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Starting monitoring stats collection
[2025-06-19T11:04:15.631+00:00][INFO ][plugins.ruleRegistry] Installing ILM policy .preview.alerts-security.alerts-policy
[2025-06-19T11:04:15.995+00:00][INFO ][plugins.alerting] Installing component template .alerts-observability.slo.alerts-mappings
[2025-06-19T11:04:15.998+00:00][INFO ][plugins.alerting] Installing component template .alerts-observability.uptime.alerts-mappings
[2025-06-19T11:04:16.096+00:00][INFO ][plugins.alerting] Installing component template .alerts-security.alerts-mappings
[2025-06-19T11:04:16.100+00:00][INFO ][plugins.alerting] Installing component template .alerts-observability.logs.alerts-mappings
[2025-06-19T11:04:16.102+00:00][INFO ][plugins.alerting] Installing component template .alerts-observability.metrics.alerts-mappings
[2025-06-19T11:04:16.105+00:00][INFO ][plugins.alerting] Installing component template .alerts-observability.apm.alerts-mappings
[2025-06-19T11:04:16.208+00:00][INFO ][plugins.alerting] Installing index template .alerts-observability.slo.alerts-default-index-template
[2025-06-19T11:04:16.211+00:00][INFO ][plugins.ruleRegistry] Installing component template .preview.alerts-security.alerts-mappings
[2025-06-19T11:04:16.703+00:00][INFO ][plugins.alerting] Installing index template .alerts-observability.uptime.alerts-default-index-template
[2025-06-19T11:04:16.801+00:00][INFO ][plugins.alerting] Installing index template .alerts-security.alerts-default-index-template
[2025-06-19T11:04:16.892+00:00][INFO ][plugins.alerting] Installing index template .alerts-observability.metrics.alerts-default-index-template
[2025-06-19T11:04:16.898+00:00][INFO ][plugins.alerting] Installing index template .alerts-observability.logs.alerts-default-index-template
[2025-06-19T11:04:16.902+00:00][INFO ][plugins.alerting] Installing index template .alerts-observability.apm.alerts-default-index-template
[2025-06-19T11:04:18.619+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell
[2025-06-19T11:04:19.026+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-observability.logs.alerts-default-000001
[2025-06-19T11:04:19.028+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-security.alerts-default-000001
[2025-06-19T11:04:19.030+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-observability.slo.alerts-default-000001
[2025-06-19T11:04:19.032+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-observability.uptime.alerts-default-000001
[2025-06-19T11:04:19.033+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-observability.apm.alerts-default-000001
[2025-06-19T11:04:19.034+00:00][INFO ][plugins.alerting] Creating concrete write index - .internal.alerts-observability.metrics.alerts-default-000001
[2025-06-19T11:04:21.364+00:00][INFO ][status] Kibana is now available (was degraded)
Error: [2025-06-19T11:05:48.537+00:00][ERROR][plugins.taskManager] Task reports:monitor "reports:monitor" failed: ConnectionError: read ECONNRESET - Local: unknown:unknown, Remote: unknown:unknown
[2025-06-19T11:07:39.095+00:00][INFO ][plugins.fleet] Fleet Usage: {"agents_enabled":true,"agents":{"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"inactive":0,"unenrolled":0,"total_all_statuses":0,"updating":0},"fleet_server":{"total_all_statuses":0,"total_enrolled":0,"healthy":0,"unhealthy":0,"offline":0,"updating":0,"num_host_urls":0}}
delete cluster elastics-vpfimh
`kbcli cluster delete elastics-vpfimh --auto-approve --namespace ns-znapx `
Cluster elastics-vpfimh deleted
pod_info:
elastics-vpfimh-kibana-0 1/1 Terminating 0 10m
elastics-vpfimh-mdit-0 3/3 Running 0 5m20s
elastics-vpfimh-mdit-1 3/3 Terminating 0 8m2s
elastics-vpfimh-mdit-2 3/3 Running 0 10m
pod_info:
elastics-vpfimh-kibana-0 1/1 Terminating 0 11m
elastics-vpfimh-mdit-0 2/3 Terminating 0 5m41s
elastics-vpfimh-mdit-1 2/3 Terminating 0 8m23s
elastics-vpfimh-mdit-2 2/3 Terminating 0 11m
No resources found in ns-znapx namespace.
delete cluster pod done
No resources found in ns-znapx namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-znapx namespace.
delete cluster done
No resources found in ns-znapx namespace.
No resources found in ns-znapx namespace.
No resources found in ns-znapx namespace.
ElasticSearch Test Suite All Done!
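Note: because the terminationPolicy was switched to WipeOut before deletion, PVCs and secrets are removed along with the pods; the "check cluster resource non-exist" step amounts to verifying that the instance label selector matches nothing:

kubectl get pods,pvc,secrets,services -l app.kubernetes.io/instance=elastics-vpfimh --namespace ns-znapx
# expected: No resources found in ns-znapx namespace.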
--------------------------------------ElasticSearch (Topology = multi-node Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=multi-node;ComponentVersion=elasticsearch;ServiceVersion=8.1.3;]|[Description=Create a cluster with the specified topology multi-node and component version elasticsearch and service version 8.1.3]
[PASSED]|[Connect]|[ComponentName=mdit]|[Description=Connect to the cluster]
[PASSED]|[AddData]|[Values=hohrk]|[Description=Add data to the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=mdit]|[Description=VerticalScaling the cluster specify component mdit]
[PASSED]|[Restart]|[ComponentName=kibana]|[Description=Restart the cluster specify component kibana]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=mdit]|[Description=Hscale the cluster instances offline specify component mdit]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=mdit]|[Description=Hscale the cluster instances online specify component mdit]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=mdit]|[Description=HorizontalScaling Out the cluster specify component mdit]
[PASSED]|[HorizontalScaling In]|[ComponentName=mdit]|[Description=HorizontalScaling In the cluster specify component mdit]
[PASSED]|[Restart]|[ComponentName=mdit]|[Description=Restart the cluster specify component mdit]
[PASSED]|[VerticalScaling]|[ComponentName=kibana]|[Description=VerticalScaling the cluster specify component kibana]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=mdit]|[Description=VolumeExpansion the cluster specify component mdit]
[PASSED]|[Failover]|[HA=Connection Stress;ComponentName=mdit]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes, thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high connection load.]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]