source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-bpqhv`
`kubectl create namespace ns-bpqhv`
namespace/ns-bpqhv created
create namespace ns-bpqhv done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.0`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 1.0.0
kbcli: 1.0.0
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:

	https://kubeblocks.io

download kbcli v1.0.0 done
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 1.0.0
kbcli: 1.0.0
Kubernetes Env: v1.32.5-eks-5d4a308
check snapshot controller
check snapshot controller done
eks default-vsc found
POD_RESOURCES: No resources found
found default storage class: gp3
KubeBlocks version is: 1.0.0
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.0
Error: no repositories to show
`helm repo add chaos-mesh https://charts.chaos-mesh.org`
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name:mdit
set component version
set component version:elasticsearch
set service versions:8.8.2,8.1.3,7.10.1,7.8.1,7.7.1
set service versions sorted:7.7.1,7.8.1,7.10.1,8.1.3,8.8.2
set elasticsearch component definition
set elasticsearch component definition elasticsearch-8-1.0.0-alpha.0
set replicas first:3,7.7.1|3,7.8.1|3,7.10.1|3,8.1.3|3,8.8.2
set replicas third:3,7.8.1
set replicas fourth:3,7.7.1
set minimum cmpv service version
set minimum cmpv service version replicas:3,7.7.1
REPORT_COUNT:1
CLUSTER_TOPOLOGY:multi-node
topology multi-node found in cluster definition elasticsearch
set elasticsearch component definition
set elasticsearch component definition elasticsearch-7-1.0.0-alpha.0
LIMIT_CPU:0.5
LIMIT_MEMORY:2
storage size: 20
No resources found in ns-bpqhv namespace.
termination_policy:WipeOut
create 3 replica WipeOut elasticsearch cluster
check component definition
set component definition by component version
check cmpd by labels
set component definition1: elasticsearch-7-1.0.0-alpha.0 by component version:elasticsearch
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: elastics-yurtvt
  namespace: ns-bpqhv
spec:
  terminationPolicy: WipeOut
  componentSpecs:
    - name: master
      componentDef: elasticsearch-7-1.0.0-alpha.0
      serviceVersion: 7.7.1
      configs:
        - name: es-cm
          variables:
            version: 7.7.1
            roles: master
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: data
      componentDef: elasticsearch-7-1.0.0-alpha.0
      serviceVersion: 7.7.1
      configs:
        - name: es-cm
          variables:
            version: 7.7.1
            roles: data,ingest,transform
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
`kubectl apply -f test_create_elastics-yurtvt.yaml`
cluster.apps.kubeblocks.io/elastics-yurtvt created
apply test_create_elastics-yurtvt.yaml Success
`rm -rf test_create_elastics-yurtvt.yaml`
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
elastics-yurtvt   ns-bpqhv                         WipeOut              Creating   May 28,2025 11:34 UTC+0800
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
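The `cluster_status` polling in this run can be sketched as a small shell helper. A minimal sketch, not the harness's actual code; it assumes the KubeBlocks `Cluster` resource reports its phase at `.status.phase`:

```shell
# Sketch: poll the Cluster phase until it reports Running, or give up
# after a bounded number of attempts. The .status.phase path is an
# assumption about the KubeBlocks Cluster CRD.
wait_for_cluster_running() {
  cluster="$1"; ns="$2"; retries="${3:-60}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    status="$(kubectl get cluster "$cluster" -n "$ns" -o jsonpath='{.status.phase}' 2>/dev/null)"
    echo "cluster_status:$status"
    if [ "$status" = "Running" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "check cluster status timeout" >&2
  return 1
}
```

Invoked as `wait_for_cluster_running elastics-yurtvt ns-bpqhv`, it mirrors the Creating/Updating/Running progression logged above.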
cluster_status:Creating
cluster_status:Creating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv`
NAME                       NAMESPACE   CLUSTER           COMPONENT   STATUS    ROLE   ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                                                        CREATED-TIME
elastics-yurtvt-data-0     ns-bpqhv    elastics-yurtvt   data        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224     May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-1     ns-bpqhv    elastics-yurtvt   data        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104     May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-2     ns-bpqhv    elastics-yurtvt   data        Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211     May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-0   ns-bpqhv    elastics-yurtvt   master      Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41     May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-1   ns-bpqhv    elastics-yurtvt   master      Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40       May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-2   ns-bpqhv    elastics-yurtvt   master      Running                       us-west-2a   500m / 500m          2Gi / 2Gi               data:20Gi   ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213   May 28,2025 11:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=elastics-yurtvt`
No resources found in ns-bpqhv namespace.
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:9200;DB_DATABASE:elastic
There is no password in Type: 25.
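The connect check above pipes a curl of `/_cluster/health` into the master pod. A hedged sketch of the same probe as a pass/fail function; the `sed`-based field extraction is an illustrative stand-in for `jq`, chosen only to stay dependency-free:

```shell
# Sketch: fetch Elasticsearch cluster health and fail unless status is
# green. JSON parsing via sed is a stand-in, not the test suite's method.
check_es_health() {
  host="$1"
  json="$(curl -fsS "http://$host:9200/_cluster/health")" || return 1
  status="$(printf '%s' "$json" | tr -d ' \n' | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')"
  nodes="$(printf '%s' "$json" | tr -d ' \n' | sed -n 's/.*"number_of_nodes":\([0-9]*\).*/\1/p')"
  echo "status=$status nodes=$nodes"
  [ "$status" = "green" ]
}
```

Run in-cluster (or via `kubectl exec` as the log does) against `elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local`, it reduces the health JSON to the two fields the check actually cares about.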
describe cluster
`kbcli cluster describe elastics-yurtvt --namespace ns-bpqhv`
Name: elastics-yurtvt	Created Time: May 28,2025 11:34 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
ns-bpqhv                                    Running   WipeOut

Endpoints:
COMPONENT   INTERNAL                                                      EXTERNAL
master      elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
data        elastics-yurtvt-data-http.ns-bpqhv.svc.cluster.local:9200

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE                   ROLE   STATUS    AZ           NODE                                                        CREATED-TIME
data        7.7.1             elastics-yurtvt-data-0            Running   us-west-2a   ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224     May 28,2025 11:34 UTC+0800
data        7.7.1             elastics-yurtvt-data-1            Running   us-west-2a   ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104     May 28,2025 11:34 UTC+0800
data        7.7.1             elastics-yurtvt-data-2            Running   us-west-2a   ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211     May 28,2025 11:34 UTC+0800
master      7.7.1             elastics-yurtvt-master-0          Running   us-west-2a   ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41     May 28,2025 11:34 UTC+0800
master      7.7.1             elastics-yurtvt-master-1          Running   us-west-2a   ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40       May 28,2025 11:34 UTC+0800
master      7.7.1             elastics-yurtvt-master-2          Running   us-west-2a   ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213   May 28,2025 11:34 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
master                          500m / 500m          2Gi / 2Gi               data:20Gi      kb-default-sc
data                            500m / 500m          2Gi / 2Gi               data:20Gi      kb-default-sc

Images:
COMPONENT   COMPONENT-DEFINITION            IMAGE
master      elasticsearch-7-1.0.0-alpha.0   docker.io/apecloud/elasticsearch:7.7.1
                                            docker.io/apecloud/elasticsearch-exporter:v1.7.0
                                            docker.io/apecloud/curl-jq:0.1.0
data        elasticsearch-7-1.0.0-alpha.0   docker.io/apecloud/elasticsearch:7.7.1
                                            docker.io/apecloud/elasticsearch-exporter:v1.7.0
                                            docker.io/apecloud/curl-jq:0.1.0

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-bpqhv elastics-yurtvt
`kbcli cluster label elastics-yurtvt app.kubernetes.io/instance- --namespace ns-bpqhv`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label elastics-yurtvt app.kubernetes.io/instance=elastics-yurtvt --namespace ns-bpqhv`
`kbcli cluster label elastics-yurtvt --list --namespace ns-bpqhv`
NAME              NAMESPACE   LABELS
elastics-yurtvt   ns-bpqhv    app.kubernetes.io/instance=elastics-yurtvt
label cluster app.kubernetes.io/instance=elastics-yurtvt Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=elastics-yurtvt --namespace ns-bpqhv`
`kbcli cluster label elastics-yurtvt --list --namespace ns-bpqhv`
NAME              NAMESPACE   LABELS
elastics-yurtvt   ns-bpqhv    app.kubernetes.io/instance=elastics-yurtvt case.name=kbcli.test1
label cluster case.name=kbcli.test1 Success
`kbcli cluster label elastics-yurtvt case.name=kbcli.test2 --overwrite --namespace ns-bpqhv`
`kbcli cluster label elastics-yurtvt --list --namespace ns-bpqhv`
NAME              NAMESPACE   LABELS
elastics-yurtvt   ns-bpqhv    app.kubernetes.io/instance=elastics-yurtvt case.name=kbcli.test2
label cluster case.name=kbcli.test2 Success
`kbcli cluster label elastics-yurtvt case.name- --namespace ns-bpqhv`
`kbcli cluster label elastics-yurtvt --list --namespace ns-bpqhv`
NAME              NAMESPACE   LABELS
elastics-yurtvt   ns-bpqhv    app.kubernetes.io/instance=elastics-yurtvt
delete cluster label case.name Success
cluster connect
No resources found in ns-bpqhv namespace.
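The label lifecycle exercised above (add, overwrite with `--overwrite`, delete with a trailing `-`) maps directly onto plain `kubectl label`. A minimal sketch of the same sequence; it assumes the `cluster` resource name resolves through the KubeBlocks CRD:

```shell
# Sketch of the add / overwrite / delete label cycle using kubectl
# instead of kbcli. The resource short name "cluster" is an assumption.
label_cycle() {
  cluster="$1"; ns="$2"
  kubectl label cluster "$cluster" case.name=kbcli.test1 -n "$ns"
  kubectl label cluster "$cluster" case.name=kbcli.test2 --overwrite -n "$ns"
  kubectl label cluster "$cluster" case.name- -n "$ns"
}
```

The trailing `-` in the last call is kubectl's (and kbcli's) syntax for removing a label rather than setting it.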
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
Defaulted container "elasticsearch" out of: elasticsearch, exporter, kbagent, prepare-plugins (init), install-plugins (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
{
  "cluster_name" : "ns-bpqhv",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-elastics-yurtvt" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-yurtvt --namespace ns-bpqhv`
Error from server (NotFound): pods "test-db-client-executionloop-elastics-yurtvt" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-elastics-yurtvt" not found
`kubectl get secrets -l app.kubernetes.io/instance=elastics-yurtvt`
No resources found in ns-bpqhv namespace.
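The cleanup above first patches the pod's finalizers to `[]` and then force-deletes it, tolerating NotFound on both steps. Sketched as a reusable helper (a sketch of that pattern, not the harness's code):

```shell
# Sketch: clear finalizers, then force-delete a pod; both steps ignore
# errors (e.g. NotFound) the same way the test run does.
force_delete_pod() {
  pod="$1"; ns="$2"
  kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods "$pod" -n "$ns" 2>/dev/null || true
  kubectl delete pod "$pod" -n "$ns" --force --grace-period=0 2>/dev/null || true
}
```

Clearing finalizers first matters because a pod with a stuck finalizer would otherwise stay in Terminating even after a forced delete.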
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:9200;DB_DATABASE:elastic
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-elastics-yurtvt
  namespace: ns-bpqhv
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local"
        - "--user"
        - ""
        - "--password"
        - ""
        - "--port"
        - "9200"
        - "--dbtype"
        - "elasticsearch7"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-elastics-yurtvt.yaml`
pod/test-db-client-executionloop-elastics-yurtvt created
apply test-db-client-executionloop-elastics-yurtvt.yaml Success
`rm -rf test-db-client-executionloop-elastics-yurtvt.yaml`
check pod status
pod_status: test-db-client-executionloop-elastics-yurtvt 0/1 ContainerCreating 0 6s
pod_status: test-db-client-executionloop-elastics-yurtvt 0/1 ContainerCreating 0 11s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 17s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 23s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 29s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 35s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 41s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 47s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 53s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 58s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 64s
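The pod-status polling above can be sketched as a bounded wait. A sketch only; the six-second interval matches the cadence visible in the log, and terminal-phase handling via `.status.phase` is an assumption:

```shell
# Sketch: poll the pod phase until it reaches a terminal state
# (Succeeded/Failed) or the timeout elapses, mirroring the
# "check pod ... status timeout" behaviour seen in this run.
wait_for_pod_done() {
  pod="$1"; ns="$2"; timeout_s="${3:-360}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    phase="$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}' 2>/dev/null)"
    case "$phase" in
      Succeeded) echo "pod $pod completed"; return 0 ;;
      Failed)    echo "pod $pod failed" >&2; return 1 ;;
    esac
    sleep 6
    elapsed=$((elapsed + 6))
  done
  echo "check pod $pod status timeout" >&2
  return 1
}
```

A pod stuck in Running, as happens below, exhausts the timeout and returns nonzero, which is exactly what triggers the diagnostic dump that follows.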
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 70s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 76s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 82s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 88s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 94s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 100s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 106s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 112s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 117s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m3s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m9s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m15s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m21s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m27s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m33s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m39s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m45s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m50s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 2m56s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m2s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m8s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m14s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m20s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m26s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m32s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m38s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m44s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m50s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 3m55s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m1s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m7s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m13s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m19s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m25s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m31s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m37s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m43s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m49s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 4m55s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m1s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m7s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m13s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m18s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m24s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m30s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m36s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m42s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m48s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 5m54s
pod_status: test-db-client-executionloop-elastics-yurtvt 1/1 Running 0 6m
check pod test-db-client-executionloop-elastics-yurtvt status timeout
--------------------------------------get pod test-db-client-executionloop-elastics-yurtvt yaml--------------------------------------
`kubectl get pod test-db-client-executionloop-elastics-yurtvt -o yaml --namespace ns-bpqhv`
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"test-db-client-executionloop-elastics-yurtvt","namespace":"ns-bpqhv"},"spec":{"containers":[{"args":["--host","elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local","--user","","--password","","--port","9200","--dbtype","elasticsearch7","--test","executionloop","--duration","60","--interval","1"],"image":"docker.io/apecloud/dbclient:test","imagePullPolicy":"IfNotPresent","name":"test-dbclient"}],"restartPolicy":"Never"}}
  creationTimestamp: "2025-05-28T03:37:43Z"
  name: test-db-client-executionloop-elastics-yurtvt
  namespace: ns-bpqhv
  resourceVersion: "18421"
  uid: 3d738ca5-3e93-46cb-b7a3-a58306e93ac4
spec:
  containers:
  - args:
    - --host
    - elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local
    - --user
    - ""
    - --password
    - ""
    - --port
    - "9200"
    - --dbtype
    - elasticsearch7
    - --test
    - executionloop
    - --duration
    - "60"
    - --interval
    - "1"
    image: docker.io/apecloud/dbclient:test
    imagePullPolicy: IfNotPresent
    name: test-dbclient
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cg825
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-172-31-15-233.us-west-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-cg825
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-05-28T03:37:56Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-05-28T03:37:43Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-05-28T03:37:56Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-05-28T03:37:56Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-05-28T03:37:43Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://24292b316b8a8293479be08fbbd406ad553fedc81044bca40b01c62694433313
    image: docker.io/apecloud/dbclient:test
    imageID: docker.io/apecloud/dbclient@sha256:0fabe76d4616f63b301f9e8b4fbf4db854bba63edf137d70a52c4a533f99b77f
    lastState: {}
    name: test-dbclient
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-05-28T03:37:55Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cg825
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 172.31.15.233
  hostIPs:
  - ip: 172.31.15.233
  phase: Running
  podIP: 172.31.3.57
  podIPs:
  - ip: 172.31.3.57
  qosClass: BestEffort
  startTime: "2025-05-28T03:37:43Z"
------------------------------------------------------------------------------------------------------------------
--------------------------------------describe pod test-db-client-executionloop-elastics-yurtvt--------------------------------------
`kubectl describe pod test-db-client-executionloop-elastics-yurtvt --namespace ns-bpqhv`
Name:             test-db-client-executionloop-elastics-yurtvt
Namespace:        ns-bpqhv
Priority:         0
Service Account:  default
Node:             ip-172-31-15-233.us-west-2.compute.internal/172.31.15.233
Start Time:       Wed, 28 May 2025 11:37:43 +0800
Labels:
Annotations:
Status:           Running
IP:               172.31.3.57
IPs:
  IP:  172.31.3.57
Containers:
  test-dbclient:
    Container ID:  containerd://24292b316b8a8293479be08fbbd406ad553fedc81044bca40b01c62694433313
    Image:         docker.io/apecloud/dbclient:test
    Image ID:      docker.io/apecloud/dbclient@sha256:0fabe76d4616f63b301f9e8b4fbf4db854bba63edf137d70a52c4a533f99b77f
    Port:
    Host Port:
    Args:
      --host
      elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local
      --user
      --password
      --port
      9200
      --dbtype
      elasticsearch7
      --test
      executionloop
      --duration
      60
      --interval
      1
    State:          Running
      Started:      Wed, 28 May 2025 11:37:55 +0800
    Ready:          True
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cg825 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-cg825:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m3s   default-scheduler  Successfully assigned ns-bpqhv/test-db-client-executionloop-elastics-yurtvt to ip-172-31-15-233.us-west-2.compute.internal
  Normal  Pulling    6m2s   kubelet            Pulling image "docker.io/apecloud/dbclient:test"
  Normal  Pulled     5m51s  kubelet            Successfully pulled image "docker.io/apecloud/dbclient:test" in 11.206s (11.206s including waiting). Image size: 416141779 bytes.
  Normal  Created    5m51s  kubelet            Created container: test-dbclient
  Normal  Started    5m51s  kubelet            Started container test-dbclient
------------------------------------------------------------------------------------------------------------------
--------------------------------------pod test-db-client-executionloop-elastics-yurtvt--------------------------------------
`kubectl logs test-db-client-executionloop-elastics-yurtvt --namespace ns-bpqhv --tail 500`
03:38:54.174 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-55 [ACTIVE(455)] Response received
03:38:54.174 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 112] Response received HTTP/1.1 404 Not Found
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-55 [ACTIVE(455)] Input ready
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 112] Consume content
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 112] Connection can be kept alive indefinitely
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 112] Response processed
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 112] releasing connection
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-55 172.31.3.57:58390<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-55][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-55][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-55 172.31.3.57:58390<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-55][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:54.175 [I/O dispatcher 56] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-55 [ACTIVE] [content length: 455; pos: 455; completed: true]
03:38:54.175 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found]
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"},"status":404}
[ 55s ] executions total: 56 successful: 0 failed: 56 disconnect: 1
03:38:55.180 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] start execution
03:38:55.181 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies -- CookieSpec selected: default
03:38:55.181 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache -- Re-using cached 'basic' auth scheme for http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:55.181 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] Request connection for {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:55.181 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection request: [route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 0 of 10; total allocated: 0 of 30]
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection leased: [id: http-outgoing-56][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 0 of 30]
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] Connection allocated: CPoolProxy{http-outgoing-56 [ACTIVE]}
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:]: Set attribute http.nio.exchange-handler
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:]: Event set [w]
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:]: Set timeout 0
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE]: Connected
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:]: Set attribute http.nio.http-exchange-state
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] Start connection routing
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] route completed
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Connection route established
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Attempt 1 to execute request
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Target auth state: UNCHALLENGED
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Proxy auth state: UNCHALLENGED
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:]: Set timeout 30000
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> HEAD /executions_loop_index HTTP/1.1
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Connection: Keep-Alive
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Authorization: Basic Og==
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:]: Event set [w]
03:38:55.183 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Request completed
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:w]: 215 bytes written
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "HEAD /executions_loop_index HTTP/1.1[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Connection: Keep-Alive[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Authorization: Basic Og==[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "[\r][\n]"
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE] Request ready
03:38:55.184 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:w]: Event cleared [w]
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: 94 bytes read
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "HTTP/1.1 404 Not Found[\r][\n]"
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "content-type: application/json; charset=UTF-8[\r][\n]"
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "content-length: 455[\r][\n]"
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "[\r][\n]"
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << HTTP/1.1 404 Not Found
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << content-type: application/json; charset=UTF-8
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << content-length: 455
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE] Response received
03:38:55.187 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Response received HTTP/1.1 404 Not Found
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] Connection can be kept alive indefinitely
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 113] Response processed
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 113] releasing connection
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-56][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-56][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely
03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56
172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:55.188 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-56][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:55.188 [main] DEBUG org.opensearch.client.RestClient -- request [HEAD http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found] Index executions_loop_index already exists. Delete index executions_loop_index 03:38:55.188 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] start execution 03:38:55.189 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies -- CookieSpec selected: default 03:38:55.189 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache -- Re-using cached 'basic' auth scheme for http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 114] Request connection for ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection request: [route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection leased: [id: http-outgoing-56][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:55.189 
[main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 114] Connection allocated: CPoolProxy***http-outgoing-56 [ACTIVE]*** 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 03:38:55.189 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:r]: Event set [w] 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE] Request ready 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Attempt 1 to execute request 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Target auth state: UNCHALLENGED 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Proxy auth state: UNCHALLENGED 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:w]: Set timeout 30000 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> DELETE /executions_loop_index HTTP/1.1 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Content-Length: 0 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Connection: Keep-Alive 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16) 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 >> Authorization: 
Basic Og== 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:w]: Event set [w] 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Request completed 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][rw:w]: 236 bytes written 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "DELETE /executions_loop_index HTTP/1.1[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Content-Length: 0[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Connection: Keep-Alive[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "Authorization: Basic Og==[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 >> "[\r][\n]" 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE] Request ready 03:38:55.190 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:w]: Event cleared [w] 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: 549 bytes read 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << 
"HTTP/1.1 404 Not Found[\r][\n]" 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "content-type: application/json; charset=UTF-8[\r][\n]" 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "content-length: 455[\r][\n]" 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "[\r][\n]" 03:38:55.194 [I/O dispatcher 57] DEBUG org.apache.http.wire -- http-outgoing-56 << "***"error":***"root_cause":[***"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"***],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"***,"status":404***" 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << HTTP/1.1 404 Not Found 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << content-type: application/json; charset=UTF-8 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.headers -- http-outgoing-56 << content-length: 455 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE(455)] Response received 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Response received HTTP/1.1 404 Not Found 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE(455)] Input ready 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Consume content 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 114] Connection can be kept alive indefinitely 03:38:55.195 [I/O 
dispatcher 57] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 114] Response processed 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 114] releasing connection 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-56][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-56][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-56 172.31.3.57:58402<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-56][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:55.195 [I/O dispatcher 57] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-56 [ACTIVE] [content length: 455; pos: 455; completed: true] 03:38:55.195 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found] Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI 
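The cycle above is the notable failure: `HEAD /executions_loop_index` returns `404 Not Found`, yet the harness prints `Index executions_loop_index already exists.` and issues a `DELETE`, which can only fail with `index_not_found_exception`. A minimal sketch of the probe-then-reset logic with the status mapping the right way around (function names are hypothetical, not the harness's code):

```python
# Hypothetical reconstruction of the harness's index-reset step, based on the
# requests visible in the log: HEAD probes for the index, then DELETE or PUT.

def index_exists(head_status: int) -> bool:
    """Interpret the status of HEAD /<index>: 200 means the index exists,
    404 means it does not. The log shows the opposite mapping (404 treated
    as 'already exists'), which is why every DELETE that follows fails."""
    if head_status == 200:
        return True
    if head_status == 404:
        return False
    raise RuntimeError(f"unexpected status {head_status} from HEAD probe")

def next_request(head_status: int) -> str:
    # Decide the follow-up request given the HEAD probe's status code.
    if index_exists(head_status):
        return "DELETE"  # drop the stale index before re-creating it
    return "PUT"         # index absent: create it directly

print(next_request(404))  # -> PUT (the harness in the log issued DELETE here)
```

With this mapping the 404 probe would lead straight to index creation instead of the failing DELETE loop recorded above.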
Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found] {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"},"status":404}
[ 56s ] executions total: 57 successful: 0 failed: 57 disconnect: 1
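Every request in this loop carries `Authorization: Basic Og==`. Basic credentials are `base64("user:password")`, so the token can be decoded directly from the log to see what the client was configured with (plain Python, just decoding the token; the conclusion about the client's configuration is an inference):

```python
import base64

# "Og==" is the Basic-auth token seen on every request in the log above.
decoded = base64.b64decode("Og==").decode("ascii")
print(repr(decoded))  # -> ':'

# Basic auth is base64("user:password"); splitting on the first colon
# shows both the username and the password are empty strings.
user, _, password = decoded.partition(":")
print(user == "" and password == "")  # -> True
```

In other words the client authenticates with an empty username and password, which the cluster evidently accepts for these requests.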
[... identical HEAD/DELETE request cycle on a fresh connection, http-outgoing-57 (exchanges 115 and 116) ...]
03:38:56.230 [main] DEBUG org.opensearch.client.RestClient -- request [HEAD http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Index executions_loop_index already exists.
Delete index executions_loop_index
03:38:56.237 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found] {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","index_uuid":"_na_","resource.type":"index_or_alias","resource.id":"executions_loop_index","index":"executions_loop_index"},"status":404}
[ 57s ] executions total: 58 successful: 0 failed: 58 disconnect: 1
[... identical cycle begins on connection http-outgoing-58 (exchange: 117) ...]
03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> 
User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16) 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> Authorization: Basic Og== 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:]: Event set [w] 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 117] Request completed 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:w]: 215 bytes written 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "HEAD /executions_loop_index HTTP/1.1[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Connection: Keep-Alive[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Authorization: Basic Og==[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "[\r][\n]" 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE] Request ready 03:38:57.248 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:w]: Event cleared [w] 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: 94 bytes read 03:38:57.252 [I/O dispatcher 59] DEBUG 
org.apache.http.wire -- http-outgoing-58 << "HTTP/1.1 404 Not Found[\r][\n]" 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "content-type: application/json; charset=UTF-8[\r][\n]" 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "content-length: 455[\r][\n]" 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "[\r][\n]" 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << HTTP/1.1 404 Not Found 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << content-type: application/json; charset=UTF-8 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << content-length: 455 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE] Response received 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 117] Response received HTTP/1.1 404 Not Found 03:38:57.252 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 117] Connection can be kept alive indefinitely 03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 117] Response processed 03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 117] releasing connection 03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-58][route: ***->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 
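One detail worth noting in the request headers above: `Authorization: Basic Og==`. `Og==` is the Base64 encoding of the bare separator `:`, i.e. an empty username and an empty password. As a quick sketch (not part of the test harness), this can be reproduced from a shell:

```shell
# Basic auth credentials are base64("user:password"); with both fields
# empty, only the ":" separator is encoded.
printf '%s' ':' | base64   # → Og==
```

So the client is sending blank credentials, which suggests the target Elasticsearch endpoint is likely running with security disabled.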
03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely
03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0
03:38:57.253 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:57.253 [main] DEBUG org.opensearch.client.RestClient -- request [HEAD http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Index executions_loop_index already exists.
Delete index executions_loop_index
03:38:57.253 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] start execution
03:38:57.254 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies -- CookieSpec selected: default
03:38:57.254 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache -- Re-using cached 'basic' auth scheme for http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 118] Request connection for {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection request: [route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection leased: [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 118] Connection allocated: CPoolProxy{http-outgoing-58 [ACTIVE]}
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler
03:38:57.254 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:r]: Event set [w]
03:38:57.254 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE] Request ready
03:38:57.254 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Attempt 1 to execute request
03:38:57.254 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Target auth state: UNCHALLENGED
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Proxy auth state: UNCHALLENGED
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:w]: Set timeout 30000
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> DELETE /executions_loop_index HTTP/1.1
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> Content-Length: 0
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> Connection: Keep-Alive
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 >> Authorization: Basic Og==
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:w]: Event set [w]
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Request completed
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][rw:w]: 236 bytes written
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "DELETE /executions_loop_index HTTP/1.1[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Content-Length: 0[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Connection: Keep-Alive[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "Authorization: Basic Og==[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 >> "[\r][\n]"
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE] Request ready
03:38:57.255 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:w]: Event cleared [w]
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: 549 bytes read
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "HTTP/1.1 404 Not Found[\r][\n]"
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "content-type: application/json; charset=UTF-8[\r][\n]"
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "content-length: 455[\r][\n]"
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "[\r][\n]"
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.wire -- http-outgoing-58 << "{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"},"status":404}"
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << HTTP/1.1 404 Not Found
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << content-type: application/json; charset=UTF-8
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.headers -- http-outgoing-58 << content-length: 455
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE(455)] Response received
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Response received HTTP/1.1 404 Not Found
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE(455)] Input ready
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Consume content
03:38:57.258 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 118] Connection can be kept alive indefinitely
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 118] Response processed
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 118] releasing connection
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-58 172.31.3.57:58428<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0
03:38:57.259 [I/O dispatcher 59] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-58][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
03:38:57.259 [I/O dispatcher 59] DEBUG
org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-58 [ACTIVE] [content length: 455; pos: 455; completed: true]
03:38:57.259 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found]
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"},"status":404}
[ 58s ] executions total: 59 successful: 0 failed: 59 disconnect: 1
03:38:58.285 [main] DEBUG org.opensearch.client.RestClient -- request [HEAD http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Index executions_loop_index already exists.
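The sequence above is suspicious: the HEAD probe returns 404 Not Found, meaning the index does not exist, yet the harness logs "Index executions_loop_index already exists." and then issues a DELETE that can only fail with another 404. A status-code check along these lines (a hypothetical sketch, not the harness's actual code; the function name is illustrative) would avoid the futile delete:

```shell
# Map the status of `HEAD /<index>` to an action: 200 means the index
# exists (delete it), 404 means it is absent (skip the delete).
index_action() {
  case "$1" in
    200) echo "delete" ;;
    404) echo "skip" ;;
    *)   echo "error" ;;
  esac
}

# In practice the status would come from something like:
#   curl -s -o /dev/null -w '%{http_code}' -I \
#     http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index
index_action 404   # prints "skip"
```

Treating a 404 on the DELETE itself as "already deleted" (idempotent cleanup) would likewise keep the loop from counting these responses as failures.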
Delete index executions_loop_index
03:38:58.294 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found]
Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found]
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index
[executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"***,"status":404*** [ 60s ] executions total: 60 successful: 0 failed: 60 disconnect: 1 03:38:58.295 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager is shutting down 03:38:58.296 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-0 172.31.3.57:34964<->10.100.32.83:9200[ACTIVE][r:r]: Close 03:38:58.298 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-0 [CLOSED]: Disconnected 03:38:58.305 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager shut down Test Result: Total Executions: 60 Successful Executions: 0 Failed Executions: 60 Disconnection Counts: 1 Connection Information: Database Type: elasticsearch7 Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local Port: 9200 Database: Table: User: Org: Access Mode: mysql Test Type: executionloop Query: Duration: 60 seconds Interval: 1 seconds ------------------------------------------------------------------------------------------------------------------ check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Running May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800 
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. 
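The status checks above poll `kbcli cluster list` until the label reports `cluster_status:Running`. A minimal sketch of such a wait loop in POSIX sh — the function name, poll interval, and timeout are ours, and the commented-out command is a hypothetical `kubectl` equivalent, not taken from the log:

```shell
# Poll a status-printing command until it emits the expected value or we time out.
# Usage: wait_for_status "<command>" <expected> [timeout_seconds]
wait_for_status() {
  cmd=$1; expected=$2; timeout=${3:-600}; waited=0
  while [ "$waited" -lt "$timeout" ]; do
    status=$(eval "$cmd")
    if [ "$status" = "$expected" ]; then
      # Mirror the log's own marker format.
      echo "cluster_status:$status"
      return 0
    fi
    sleep 5; waited=$((waited + 5))
  done
  echo "timed out waiting for status $expected" >&2
  return 1
}

# Hypothetical usage for this run (jsonpath expression is an assumption):
# wait_for_status "kubectl get cluster elastics-yurtvt -n ns-bpqhv -o jsonpath='{.status.phase}'" Running
```

The loop evaluates an arbitrary command so it can be reused for both the cluster phase and per-pod checks.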
check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "content-length: 455[\r][\n]" 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "[\r][\n]" 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << HTTP/1.1 404 Not Found 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << content-type: application/json; charset=UTF-8 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << content-length: 455 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE] Response received 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 119] Response received HTTP/1.1 404 Not Found 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 119] Connection can be kept alive indefinitely 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 119] Response processed 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 119] releasing connection 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:58.284 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.285 [main] DEBUG org.opensearch.client.RestClient -- request [HEAD http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found] Index executions_loop_index already exists. Delete index executions_loop_index 03:38:58.285 [main] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] start execution 03:38:58.285 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies -- CookieSpec selected: default 03:38:58.286 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache -- Re-using cached 'basic' auth scheme for http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:58.286 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 120] Request connection for {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:58.286 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection request: [route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.286 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:58.288 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection leased: [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.288 [main] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 120] Connection allocated: CPoolProxy{http-outgoing-59 [ACTIVE]} 03:38:58.288 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Set attribute http.nio.exchange-handler 03:38:58.288 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][rw:r]: Event set [w] 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE] Request ready 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Attempt 1 to execute request 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Target auth state: UNCHALLENGED 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Proxy auth state: UNCHALLENGED 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][rw:w]: Set timeout 30000 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> DELETE /executions_loop_index HTTP/1.1 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> Content-Length: 0 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> Host:
elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> Connection: Keep-Alive 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16) 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 >> Authorization: Basic Og== 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][rw:w]: Event set [w] 03:38:58.289 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Request completed 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][rw:w]: 236 bytes written 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "DELETE /executions_loop_index HTTP/1.1[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "Content-Length: 0[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "Connection: Keep-Alive[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "User-Agent: Apache-HttpAsyncClient/4.1.5 (Java/11.0.16)[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "Authorization: Basic Og==[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 >> "[\r][\n]" 03:38:58.290 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE] Request ready 03:38:58.290 [I/O dispatcher 60] DEBUG 
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:w]: Event cleared [w] 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: 549 bytes read 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "HTTP/1.1 404 Not Found[\r][\n]" 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "content-type: application/json; charset=UTF-8[\r][\n]" 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "content-length: 455[\r][\n]" 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "[\r][\n]" 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.wire -- http-outgoing-59 << "{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"},"status":404}" 03:38:58.291 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << HTTP/1.1 404 Not Found 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << content-type: application/json; charset=UTF-8 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.headers -- http-outgoing-59 << content-length: 455 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE(455)] Response received 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Response received HTTP/1.1 404 Not Found 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE(455)] Input ready 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Consume content 03:38:58.292 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 120] Connection can be kept alive indefinitely 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.MainClientExec -- [exchange: 120] Response processed 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalHttpAsyncClient -- [exchange: 120] releasing connection 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Releasing connection: [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200] can be kept alive indefinitely 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-59 172.31.3.57:42512<->10.100.32.83:9200[ACTIVE][r:r]: Set timeout 0 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection released: [id: http-outgoing-59][route: {}->http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30] 03:38:58.293 [I/O dispatcher 60] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-59 [ACTIVE] [content length: 455; pos: 455; completed: true] 03:38:58.294 [main] DEBUG org.opensearch.client.RestClient -- request [DELETE http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index] returned [HTTP/1.1 404 Not Found] Execution loop failed: method [DELETE], host [http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200], URI [/executions_loop_index], status line [HTTP/1.1 404 Not Found] {"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"}],"type":"index_not_found_exception","reason":"no such index [executions_loop_index]","resource.type":"index_or_alias","resource.id":"executions_loop_index","index_uuid":"_na_","index":"executions_loop_index"},"status":404} [ 60s ] executions total: 60 successful: 0 failed: 60 disconnect: 1 03:38:58.295 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager is shutting down 03:38:58.296 [main] DEBUG org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl -- http-outgoing-0 172.31.3.57:34964<->10.100.32.83:9200[ACTIVE][r:r]: Close 03:38:58.298 [I/O dispatcher 1] DEBUG org.apache.http.impl.nio.client.InternalIODispatch -- http-outgoing-0 [CLOSED]: Disconnected 03:38:58.305 [main] DEBUG org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager -- Connection manager shut down Test Result: Total Executions: 60 Successful Executions: 0 Failed Executions: 60 Disconnection Counts: 1 Connection Information: Database Type: elasticsearch7 Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local Port: 9200 Database: Table: User: Org: Access Mode: mysql Test Type: executionloop Query: Duration: 60 seconds Interval: 1 seconds DB_CLIENT_BATCH_DATA_COUNT: 0 `kubectl patch -p
'{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-elastics-yurtvt --namespace ns-bpqhv ` pod/test-db-client-executionloop-elastics-yurtvt patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-executionloop-elastics-yurtvt" force deleted No resources found in ns-bpqhv namespace. `echo "curl -X POST 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/boss/_doc/1?pretty' -H 'Content-Type: application/json' -d '{\"datainsert\":\"vaehx\"}'" | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` Defaulted container "elasticsearch" out of: elasticsearch, exporter, kbagent, prepare-plugins (init), install-plugins (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 22 0 0 100 22 0 21 0:00:01 0:00:01 --:--:-- 21 100 22 0 0 100 22 0 10 0:00:02 0:00:02 --:--:-- 10 { 100 240 100 218 100 22 74 7 0:00:03 0:00:02 0:00:01 74 100 240 100 218 100 22 74 7 0:00:03 0:00:02 0:00:01 74 "_index" : "boss", "_type" : "_doc", "_id" : "1", "_version" : 1, "result" : "created", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 0, "_primary_term" : 1 } add consistent data vaehx Success `kubectl get pvc -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=master,apps.kubeblocks.io/vct-name=data --namespace ns-bpqhv ` cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in elastics-yurtvt namespace.
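The execution loop above failed 60 out of 60 times because the client issued DELETE /executions_loop_index against an index that did not exist, and Elasticsearch answers that with 404 index_not_found_exception. For cleanup purposes a 404 is as good as a 200, so a test client can treat both as success. A sketch of that idea (the helper name is ours; the commented curl line shows hypothetical usage against this run's service):

```shell
# For an index-cleanup DELETE, both outcomes mean "the index is gone":
#   200 - the index existed and was deleted
#   404 - the index was already absent
delete_ok() {
  code=$1
  [ "$code" = "200" ] || [ "$code" = "404" ]
}

# Hypothetical usage against the service from this run:
# code=$(curl -s -o /dev/null -w '%{http_code}' -X DELETE \
#   http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index)
# delete_ok "$code" || echo "delete failed with HTTP $code" >&2
```

Alternatively, Elasticsearch's delete-index API accepts `?ignore_unavailable=true`, which suppresses the 404 server-side.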
`kbcli cluster volume-expand elastics-yurtvt --auto-approve --force=true --components master --volume-claim-templates data --storage 24Gi --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-volumeexpansion-2dsls created successfully, you can view the progress: kbcli cluster describe-ops elastics-yurtvt-volumeexpansion-2dsls -n ns-bpqhv check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-volumeexpansion-2dsls ns-bpqhv VolumeExpansion elastics-yurtvt master Running 0/3 May 28,2025 11:44 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800 
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done No resources found in elastics-yurtvt namespace. check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-volumeexpansion-2dsls ns-bpqhv VolumeExpansion elastics-yurtvt master Succeed 3/3 May 28,2025 11:44 UTC+0800 check ops status done ops_status:elastics-yurtvt-volumeexpansion-2dsls ns-bpqhv VolumeExpansion elastics-yurtvt master Succeed 3/3 May 28,2025 11:44 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-volumeexpansion-2dsls --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-volumeexpansion-2dsls patched `kbcli cluster delete-ops --name elastics-yurtvt-volumeexpansion-2dsls --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-volumeexpansion-2dsls deleted No resources found in ns-bpqhv namespace.
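After the VolumeExpansion op succeeds the master pods report data:24Gi while the data pods stay at 20Gi. A sketch of how the expansion could be verified from the PVC side — the helper is ours, and the commented kubectl pipeline (label selector from this run, jsonpath expression an assumption) shows hypothetical usage:

```shell
# Succeed only if every line of stdin equals the expected capacity.
all_capacity() {
  expected=$1
  ok=0
  while read -r size; do
    [ "$size" = "$expected" ] || ok=1
  done
  return $ok
}

# Hypothetical check against this run's master PVCs:
# kubectl get pvc -n ns-bpqhv \
#   -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=master,apps.kubeblocks.io/vct-name=data \
#   -o jsonpath='{range .items[*]}{.status.capacity.storage}{"\n"}{end}' | all_capacity 24Gi
```

Checking `.status.capacity` rather than `.spec.resources.requests` matters here: the spec changes as soon as the op is accepted, but status only updates once the CSI driver has actually resized the volume.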
check db_client batch data count `echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check db_client batch data Success cmpv upgrade service version:3,7.7.1|3,7.8.1|3,7.10.1|3,8.1.3|3,8.8.2 set latest cmpv service version latest service version:7.10.1 cmpv service version upgrade and downgrade upgrade from:7.7.1 to service version:7.8.1 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: elastics-yurtvt-upgrade-cmpv- namespace: ns-bpqhv spec: clusterName: elastics-yurtvt upgrade: components: - componentName: master serviceVersion: 7.8.1 - componentName: data serviceVersion: 7.8.1 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_elastics-yurtvt.yaml` opsrequest.operations.kubeblocks.io/elastics-yurtvt-upgrade-cmpv-g89pp created create test_ops_cluster_elastics-yurtvt.yaml Success `rm -rf test_ops_cluster_elastics-yurtvt.yaml` check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-upgrade-cmpv-g89pp ns-bpqhv Upgrade elastics-yurtvt Creating -/- May 28,2025 11:46 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-upgrade-cmpv-g89pp ns-bpqhv Upgrade elastics-yurtvt data,master Succeed 6/6 May 28,2025 11:46 UTC+0800 check ops status done ops_status:elastics-yurtvt-upgrade-cmpv-g89pp ns-bpqhv Upgrade elastics-yurtvt data,master Succeed 6/6 May 28,2025 11:46 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-upgrade-cmpv-g89pp --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-upgrade-cmpv-g89pp patched `kbcli cluster delete-ops --name elastics-yurtvt-upgrade-cmpv-g89pp --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-upgrade-cmpv-g89pp deleted No resources found in ns-bpqhv namespace.
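The Upgrade op to 7.8.1 reports Succeed 6/6, but the log's only functional verification is the cluster-health curl. One way to confirm the pods actually serve the new version is to read it from the Elasticsearch root endpoint, which returns JSON containing `"number" : "x.y.z"`. The parsing helper below is ours (sed instead of jq so the sketch stays POSIX), and the commented kubectl line is hypothetical usage:

```shell
# Extract the "number" field from Elasticsearch root-endpoint JSON on stdin.
es_version() {
  sed -n 's/.*"number"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -n 1
}

# Hypothetical usage inside a master pod from this run:
# kubectl exec elastics-yurtvt-master-0 -n ns-bpqhv -- \
#   curl -s http://localhost:9200 | es_version
```

Comparing the extracted value against the serviceVersion requested in the OpsRequest closes the loop between "the operator says Succeed" and "the process really restarted on the new image".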
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
upgrade from:7.8.1 to service version:7.10.1
cluster upgrade
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-yurtvt-upgrade-cmpv-
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  upgrade:
    components:
      - componentName: master
        serviceVersion: 7.10.1
      - componentName: data
        serviceVersion: 7.10.1
  type: Upgrade
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-yurtvt.yaml`
opsrequest.operations.kubeblocks.io/elastics-yurtvt-upgrade-cmpv-5b9zq created
create test_ops_cluster_elastics-yurtvt.yaml Success
`rm -rf test_ops_cluster_elastics-yurtvt.yaml`
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-upgrade-cmpv-5b9zq ns-bpqhv Upgrade elastics-yurtvt data,master Running 0/6 May 28,2025 11:52 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-upgrade-cmpv-5b9zq ns-bpqhv Upgrade elastics-yurtvt data,master Succeed 6/6 May 28,2025 11:52 UTC+0800
check ops status done
ops_status:elastics-yurtvt-upgrade-cmpv-5b9zq ns-bpqhv Upgrade elastics-yurtvt data,master Succeed 6/6 May 28,2025 11:52 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-upgrade-cmpv-5b9zq --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-upgrade-cmpv-5b9zq patched
`kbcli cluster delete-ops --name elastics-yurtvt-upgrade-cmpv-5b9zq --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-upgrade-cmpv-5b9zq deleted
No resources found in ns-bpqhv namespace.
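The `check db_client batch data count` steps in this run post a small search request to verify no documents were lost across the operation. The escaped `-d` payload, unescaped for readability (a sketch identical in content to the inline form):

```json
{
  "size": 0,
  "track_total_hits": true
}
```

`"size": 0` suppresses the hit list, and `"track_total_hits": true` makes Elasticsearch report the exact total document count instead of capping it at 10,000.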
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data --namespace ns-bpqhv | (grep "data" || true )`
cluster data scale-out
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-yurtvt namespace.
`kbcli cluster scale-out elastics-yurtvt --auto-approve --force=true --components data --replicas 1 --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-rd22c created successfully, you can view the progress:
kbcli cluster describe-ops elastics-yurtvt-horizontalscaling-rd22c -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-rd22c ns-bpqhv HorizontalScaling elastics-yurtvt data Running 0/1 May 28,2025 11:59 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-3 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 11:59 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
No resources found in elastics-yurtvt namespace.
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-rd22c ns-bpqhv HorizontalScaling elastics-yurtvt data Succeed 1/1 May 28,2025 11:59 UTC+0800
check ops status done
ops_status:elastics-yurtvt-horizontalscaling-rd22c ns-bpqhv HorizontalScaling elastics-yurtvt data Succeed 1/1 May 28,2025 11:59 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-horizontalscaling-rd22c --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-horizontalscaling-rd22c patched
`kbcli cluster delete-ops --name elastics-yurtvt-horizontalscaling-rd22c --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-rd22c deleted
No resources found in ns-bpqhv namespace.
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data --namespace ns-bpqhv | (grep "data" || true )`
cluster data scale-in
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-yurtvt namespace.
`kbcli cluster scale-in elastics-yurtvt --auto-approve --force=true --components data --replicas 1 --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-nrmms created successfully, you can view the progress:
kbcli cluster describe-ops elastics-yurtvt-horizontalscaling-nrmms -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-nrmms ns-bpqhv HorizontalScaling elastics-yurtvt data Running 0/1 May 28,2025 12:01 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 11:34 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
No resources found in elastics-yurtvt namespace.
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-nrmms ns-bpqhv HorizontalScaling elastics-yurtvt data Succeed 1/1 May 28,2025 12:01 UTC+0800
check ops status done
ops_status:elastics-yurtvt-horizontalscaling-nrmms ns-bpqhv HorizontalScaling elastics-yurtvt data Succeed 1/1 May 28,2025 12:01 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-horizontalscaling-nrmms --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-horizontalscaling-nrmms patched
`kbcli cluster delete-ops --name elastics-yurtvt-horizontalscaling-nrmms --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-nrmms deleted
No resources found in ns-bpqhv namespace.
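The `kbcli cluster scale-out` and `scale-in` commands above each generate a HorizontalScaling OpsRequest behind the scenes. A minimal sketch of the scale-out form, assuming the `replicaChanges` field of the operations.kubeblocks.io/v1alpha1 API (the generated name suffix and optional fields will differ from what kbcli actually produces):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-yurtvt-horizontalscaling-
  namespace: ns-bpqhv
spec:
  type: HorizontalScaling
  clusterName: elastics-yurtvt
  force: true
  horizontalScaling:
    - componentName: data
      scaleOut:
        replicaChanges: 1   # add one data replica; scaleIn mirrors this shape
```

This matches the `--components data --replicas 1` flags used in the run: the flag value is a delta applied through `replicaChanges`, not an absolute replica count.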
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data --namespace ns-bpqhv | (grep "data" || true )`
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart elastics-yurtvt --auto-approve --force=true --components data --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-8jfng created successfully, you can view the progress:
kbcli cluster describe-ops elastics-yurtvt-restart-8jfng -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-8jfng ns-bpqhv Restart elastics-yurtvt data Running 0/3 May 28,2025 12:02 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:07 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:04 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:03 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 11:34 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 11:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-8jfng ns-bpqhv Restart elastics-yurtvt data Succeed 3/3 May 28,2025 12:02 UTC+0800
check ops status done
ops_status:elastics-yurtvt-restart-8jfng ns-bpqhv Restart elastics-yurtvt data Succeed 3/3 May 28,2025 12:02 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-restart-8jfng --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-restart-8jfng patched
`kbcli cluster delete-ops --name elastics-yurtvt-restart-8jfng --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-8jfng deleted
No resources found in ns-bpqhv namespace.
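The restart above (`kbcli cluster restart --components data`) likewise corresponds to a Restart OpsRequest. A sketch assuming the v1alpha1 `restart` field (only the data pods are recreated, which is why the master pods keep their 11:34 creation times in the listing above):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-yurtvt-restart-
  namespace: ns-bpqhv
spec:
  type: Restart
  clusterName: elastics-yurtvt
  restart:
    - componentName: data   # restart is rolled out one data pod at a time
```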
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster hscale offline instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-yurtvt-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: elastics-yurtvt
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-bpqhv
spec:
  type: HorizontalScaling
  clusterName: elastics-yurtvt
  force: true
  horizontalScaling:
    - componentName: master
      scaleIn:
        onlineInstancesToOffline:
          - elastics-yurtvt-master-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-yurtvt.yaml`
opsrequest.operations.kubeblocks.io/elastics-yurtvt-hscaleoffinstance-fwhh7 created
create test_ops_cluster_elastics-yurtvt.yaml Success
`rm -rf test_ops_cluster_elastics-yurtvt.yaml`
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-hscaleoffinstance-fwhh7 ns-bpqhv HorizontalScaling elastics-yurtvt master Running 0/1 May 28,2025 12:09 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:07 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:04 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:09 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 12:11 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:09 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-1 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-hscaleoffinstance-fwhh7 ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:09 UTC+0800
check ops status done
ops_status:elastics-yurtvt-hscaleoffinstance-fwhh7 ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:09 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-hscaleoffinstance-fwhh7 --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-hscaleoffinstance-fwhh7 patched
`kbcli cluster delete-ops --name elastics-yurtvt-hscaleoffinstance-fwhh7 --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-hscaleoffinstance-fwhh7 deleted
No resources found in ns-bpqhv namespace.
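After the `onlineInstancesToOffline` ops above succeeds, the parked pod is recorded on the Cluster spec rather than simply deleted. A sketch of the relevant componentSpecs fragment, assuming the `offlineInstances` field of the KubeBlocks Cluster API:

```yaml
componentSpecs:
  - name: master
    # elastics-yurtvt-master-0 is taken out of service but remembered,
    # so a later offlineInstancesToOnline ops can bring back the same
    # named instance (as happens in the next step of this run).
    offlineInstances:
      - elastics-yurtvt-master-0
```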
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-1 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster hscale online instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: elastics-yurtvt-hscaleoninstance-
  labels:
    app.kubernetes.io/instance: elastics-yurtvt
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-bpqhv
spec:
  type: HorizontalScaling
  clusterName: elastics-yurtvt
  force: true
  horizontalScaling:
    - componentName: master
      scaleOut:
        offlineInstancesToOnline:
          - elastics-yurtvt-master-0
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_elastics-yurtvt.yaml`
opsrequest.operations.kubeblocks.io/elastics-yurtvt-hscaleoninstance-c8chp created
create test_ops_cluster_elastics-yurtvt.yaml Success
`rm -rf test_ops_cluster_elastics-yurtvt.yaml`
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-hscaleoninstance-c8chp ns-bpqhv HorizontalScaling elastics-yurtvt master Running 0/1 May 28,2025 12:13 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:07 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:04 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:13 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:13 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:17 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:15 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-hscaleoninstance-c8chp ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:13 UTC+0800
check ops status done
ops_status:elastics-yurtvt-hscaleoninstance-c8chp ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:13 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-hscaleoninstance-c8chp --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-hscaleoninstance-c8chp patched
`kbcli cluster delete-ops --name elastics-yurtvt-hscaleoninstance-c8chp --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-hscaleoninstance-c8chp deleted
No resources found in ns-bpqhv namespace.
check db_client batch data count `echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '***\"size\": 0,\"track_total_hits\": true***' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop elastics-yurtvt --auto-approve --force=true --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-stop-gb9jm created successfully, you can view the progress: kbcli cluster describe-ops elastics-yurtvt-stop-gb9jm -n ns-bpqhv check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-stop-gb9jm ns-bpqhv Stop elastics-yurtvt data,master Running 0/6 May 28,2025 12:19 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Stopping May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER 
COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-stop-gb9jm ns-bpqhv Stop elastics-yurtvt data,master Succeed 6/6 May 28,2025 12:19 UTC+0800 check ops status done ops_status:elastics-yurtvt-stop-gb9jm ns-bpqhv Stop elastics-yurtvt data,master Succeed 6/6 May 28,2025 12:19 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-stop-gb9jm --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-stop-gb9jm patched `kbcli cluster delete-ops --name elastics-yurtvt-stop-gb9jm --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-stop-gb9jm deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start elastics-yurtvt --force=true --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-start-pwk5k created successfully, you can view the progress: kbcli cluster describe-ops elastics-yurtvt-start-pwk5k -n ns-bpqhv check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-start-pwk5k ns-bpqhv Start elastics-yurtvt data,master Running 0/6 May 28,2025 12:21 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace 
ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:21 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. 
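The stop/start cycle above was driven by `kbcli cluster stop`/`start`, which create OpsRequest objects behind the scenes (the `opsrequests.operations` resources patched and deleted in this log). A hedged sketch of a declarative equivalent; the API group and field names below follow KubeBlocks 1.x conventions and should be verified against the CRDs installed in your cluster:

```yaml
# Assumed shape of a Stop OpsRequest (verify against your KubeBlocks version).
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-yurtvt-stop
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: Stop
---
# Starting the cluster again is the same object with type: Start.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-yurtvt-start
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: Start
```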
check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-start-pwk5k ns-bpqhv Start elastics-yurtvt data,master Succeed 6/6 May 28,2025 12:21 UTC+0800 check ops status done ops_status:elastics-yurtvt-start-pwk5k ns-bpqhv Start elastics-yurtvt data,master Succeed 6/6 May 28,2025 12:21 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-start-pwk5k --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-start-pwk5k patched `kbcli cluster delete-ops --name elastics-yurtvt-start-pwk5k --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-start-pwk5k deleted No resources found in ns-bpqhv namespace. check db_client batch data count `echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check db_client batch data Success test failover connectionstress check node drain check node drain success Error from server (NotFound): pods "test-db-client-connectionstress-elastics-yurtvt" not found `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-yurtvt --namespace ns-bpqhv ` Error from server (NotFound): pods "test-db-client-connectionstress-elastics-yurtvt" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): pods "test-db-client-connectionstress-elastics-yurtvt" not found `kubectl get secrets -l app.kubernetes.io/instance=elastics-yurtvt` No resources found in ns-bpqhv namespace. Not found cluster secret DB_USERNAME:;DB_PASSWORD:;DB_PORT:9200;DB_DATABASE:elastic
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-elastics-yurtvt
  namespace: ns-bpqhv
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local"
    - "--user"
    - ""
    - "--password"
    - ""
    - "--port"
    - "9200"
    - "--database"
    - "elastic"
    - "--dbtype"
    - "elasticsearch7"
    - "--test"
    - "connectionstress"
    - "--connections"
    - "1024"
    - "--duration"
    - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-elastics-yurtvt.yaml` pod/test-db-client-connectionstress-elastics-yurtvt created apply test-db-client-connectionstress-elastics-yurtvt.yaml Success `rm -rf test-db-client-connectionstress-elastics-yurtvt.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-elastics-yurtvt 1/1 Running 0 6s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-elastics-yurtvt 1/1 Running 0 11s check pod test-db-client-connectionstress-elastics-yurtvt status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-elastics-yurtvt 0/1 Completed 0 16s check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Running May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) 
STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:20Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:21 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. 
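The recurring "check db_client batch data count" step issues an exact-count query against the `executions_loop_index` index: `"size": 0` suppresses document hits, and `"track_total_hits": true` asks Elasticsearch for an exact total rather than the default cap of 10,000. A sketch of that request; the body is validated locally here, and the in-cluster curl call is shown commented out:

```shell
# Count-query body used by the batch-data check.
BODY='{"size": 0, "track_total_hits": true}'
echo "$BODY" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "body ok"

# Against the in-cluster service (reference only):
# curl -s -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' \
#   -H 'Content-Type: application/json' -d "$BODY"
```

The response's `hits.total.value` then carries the exact document count the test compares before and after each operation.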
check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done
[verbose Apache HttpAsyncClient DEBUG/wire logging from the connection-stress client elided; the final request GET /_cluster/health?pretty=true returned HTTP/1.1 200 OK with body:]
{
  "cluster_name" : "ns-bpqhv",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Test Result: Created 1024 connections
Connection Information: Database Type: elasticsearch7 Host: elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local Port: 9200 Database: elastic Table: User: Org: Access Mode: mysql Test Type: connectionstress Connection Count: 1024 Duration: 60 seconds
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-elastics-yurtvt --namespace ns-bpqhv ` pod/test-db-client-connectionstress-elastics-yurtvt patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod "test-db-client-connectionstress-elastics-yurtvt" force deleted check failover pod name failover pod name:elastics-yurtvt-master-0 failover connectionstress Success No resources found in ns-bpqhv namespace. check db_client batch data count `echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check db_client batch data Success check component data exists `kubectl get components -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data --namespace ns-bpqhv | (grep "data" || true )` `kubectl get pvc -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data,apps.kubeblocks.io/vct-name=data --namespace ns-bpqhv ` cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in elastics-yurtvt namespace. 
`kbcli cluster volume-expand elastics-yurtvt --auto-approve --force=true --components data --volume-claim-templates data --storage 22Gi --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-volumeexpansion-v7wsp created successfully, you can view the progress: kbcli cluster describe-ops elastics-yurtvt-volumeexpansion-v7wsp -n ns-bpqhv check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-volumeexpansion-v7wsp ns-bpqhv VolumeExpansion elastics-yurtvt data Running 0/3 May 28,2025 12:24 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master 
Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:21 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:21 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done No resources found in elastics-yurtvt namespace. check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-volumeexpansion-v7wsp ns-bpqhv VolumeExpansion elastics-yurtvt data Succeed 3/3 May 28,2025 12:24 UTC+0800 check ops status done ops_status:elastics-yurtvt-volumeexpansion-v7wsp ns-bpqhv VolumeExpansion elastics-yurtvt data Succeed 3/3 May 28,2025 12:24 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-volumeexpansion-v7wsp --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-volumeexpansion-v7wsp patched `kbcli cluster delete-ops --name elastics-yurtvt-volumeexpansion-v7wsp --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-volumeexpansion-v7wsp deleted No resources found in ns-bpqhv namespace. 
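The `kbcli cluster volume-expand` step above (data PVCs 20Gi → 22Gi) also translates to an OpsRequest. A hedged sketch of the assumed declarative form; field names follow KubeBlocks 1.x conventions and should be checked against the installed CRDs. Note that PVC expansion is grow-only and requires a StorageClass with volume expansion enabled:

```yaml
# Assumed shape of a VolumeExpansion OpsRequest (verify against your CRD version).
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-yurtvt-volumeexpansion
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: VolumeExpansion
  volumeExpansion:
  - componentName: data
    volumeClaimTemplates:
    - name: data
      storage: 22Gi
```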
check db_client batch data count `echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check db_client batch data Success cluster master scale-out check cluster status before ops check cluster status done cluster_status:Running No resources found in elastics-yurtvt namespace. `kbcli cluster scale-out elastics-yurtvt --auto-approve --force=true --components master --replicas 1 --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-horizontalscaling-8pd5l created successfully, you can view the progress: kbcli cluster describe-ops elastics-yurtvt-horizontalscaling-8pd5l -n ns-bpqhv check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-horizontalscaling-8pd5l ns-bpqhv HorizontalScaling elastics-yurtvt master Running 0/1 May 28,2025 12:25 UTC+0800 check cluster status `kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt cluster_status:Updating (repeated while polling until the rollout finished) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:21 UTC+0800 elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:26 UTC+0800 elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:31 UTC+0800 elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:29 UTC+0800 elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:27 UTC+0800 elastics-yurtvt-master-3 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 12:25 UTC+0800 check pod status done No resources found in ns-bpqhv namespace. check cluster connect `echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh` check cluster connect done No resources found in elastics-yurtvt namespace. check ops status `kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME elastics-yurtvt-horizontalscaling-8pd5l ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:25 UTC+0800 check ops status done ops_status:elastics-yurtvt-horizontalscaling-8pd5l ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:25 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-horizontalscaling-8pd5l --namespace ns-bpqhv ` opsrequest.operations.kubeblocks.io/elastics-yurtvt-horizontalscaling-8pd5l patched `kbcli cluster delete-ops --name elastics-yurtvt-horizontalscaling-8pd5l --force --auto-approve --namespace ns-bpqhv ` OpsRequest elastics-yurtvt-horizontalscaling-8pd5l deleted No resources found in ns-bpqhv namespace. 
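The scale-out of the master component above (3 → 4 replicas, confirmed by the new elastics-yurtvt-master-3 pod) can likewise be expressed declaratively. A sketch assuming the KubeBlocks 1.x `scaleOut.replicaChanges` field; verify the exact schema against your installed OpsRequest CRD:

```yaml
# Assumed shape of a HorizontalScaling OpsRequest adding one master replica.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: elastics-yurtvt-scale-out
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: HorizontalScaling
  horizontalScaling:
  - componentName: master
    scaleOut:
      replicaChanges: 1
```

The scale-in later in this run would use the same shape with `scaleIn.replicaChanges: 1` instead.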
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster master scale-in
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in elastics-yurtvt namespace.
`kbcli cluster scale-in elastics-yurtvt --auto-approve --force=true --components master --replicas 1 --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-b4kz9 created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-yurtvt-horizontalscaling-b4kz9 -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-b4kz9 ns-bpqhv HorizontalScaling elastics-yurtvt master Running 0/1 May 28,2025 12:33 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating (repeated while the scale-in ran)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:21 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:21 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 500m / 500m 2Gi / 2Gi data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:33 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 12:37 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:36 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
No resources found in elastics-yurtvt namespace.
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-horizontalscaling-b4kz9 ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:33 UTC+0800
check ops status done
ops_status:elastics-yurtvt-horizontalscaling-b4kz9 ns-bpqhv HorizontalScaling elastics-yurtvt master Succeed 1/1 May 28,2025 12:33 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-horizontalscaling-b4kz9 --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-horizontalscaling-b4kz9 patched
`kbcli cluster delete-ops --name elastics-yurtvt-horizontalscaling-b4kz9 --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-horizontalscaling-b4kz9 deleted
No resources found in ns-bpqhv namespace.
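`kbcli cluster scale-in` is a convenience wrapper that creates a HorizontalScaling OpsRequest behind the scenes. A hand-written equivalent might look like the sketch below; the apiVersion and field names follow the `operations.kubeblocks.io` OpsRequest schema as I understand it and should be checked against the CRDs installed with KubeBlocks 1.0, and the metadata name is hypothetical:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: master-scale-in          # hypothetical name; kbcli generates one
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: HorizontalScaling
  horizontalScaling:
  - componentName: master
    scaleIn:
      replicaChanges: 1          # remove one master replica (4 -> 3)
```

Writing the OpsRequest directly is useful when the operation needs to live in version control rather than be issued ad hoc from the CLI.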
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
check component data exists
`kubectl get components -l app.kubernetes.io/instance=elastics-yurtvt,apps.kubeblocks.io/component-name=data --namespace ns-bpqhv | (grep "data" || true )`
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale elastics-yurtvt --auto-approve --force=true --components data --cpu 600m --memory 2.1Gi --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-verticalscaling-cx7xp created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-yurtvt-verticalscaling-cx7xp -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-verticalscaling-cx7xp ns-bpqhv VerticalScaling elastics-yurtvt data Running 0/3 May 28,2025 12:40 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating (repeated while the vertical scaling ran)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:44 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:42 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:40 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 12:37 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:36 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:34 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-verticalscaling-cx7xp ns-bpqhv VerticalScaling elastics-yurtvt data Succeed 3/3 May 28,2025 12:40 UTC+0800
check ops status done
ops_status:elastics-yurtvt-verticalscaling-cx7xp ns-bpqhv VerticalScaling elastics-yurtvt data Succeed 3/3 May 28,2025 12:40 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-verticalscaling-cx7xp --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-verticalscaling-cx7xp patched
`kbcli cluster delete-ops --name elastics-yurtvt-verticalscaling-cx7xp --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-verticalscaling-cx7xp deleted
No resources found in ns-bpqhv namespace.
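After the vertical scale, `list-instances` reports the data pods' memory as `2254857830400m` rather than `2.1Gi`. That is the Kubernetes canonical form for a fractional binary quantity: `1Gi` is 1024³ bytes, and the `m` suffix means thousandths of a byte, so `2.1Gi` becomes 2.1 × 1024³ × 1000 milli-bytes. A quick local check of that arithmetic:

```shell
# 2.1Gi expressed in Kubernetes milli-units (1/1000 of a byte).
MILLI=$(awk 'BEGIN { printf "%.0f", 2.1 * 1024 * 1024 * 1024 * 1000 }')
echo "2.1Gi = ${MILLI}m"   # prints "2.1Gi = 2254857830400m"
```

This is cosmetic only: the apiserver preserves the value exactly, it just cannot render 2.1 × 1024³ bytes as a whole number of Gi or Mi.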
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart elastics-yurtvt --auto-approve --force=true --components master --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-8t8hg created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-yurtvt-restart-8t8hg -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-8t8hg ns-bpqhv Restart elastics-yurtvt master Running 0/3 May 28,2025 12:46 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating (repeated while the restart ran)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:44 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:42 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:40 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-0-40.us-west-2.compute.internal/172.31.0.40 May 28,2025 12:50 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:48 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:46 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-8t8hg ns-bpqhv Restart elastics-yurtvt master Succeed 3/3 May 28,2025 12:46 UTC+0800
check ops status done
ops_status:elastics-yurtvt-restart-8t8hg ns-bpqhv Restart elastics-yurtvt master Succeed 3/3 May 28,2025 12:46 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-restart-8t8hg --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-restart-8t8hg patched
`kbcli cluster delete-ops --name elastics-yurtvt-restart-8t8hg --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-8t8hg deleted
No resources found in ns-bpqhv namespace.
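Like the scaling commands, `kbcli cluster restart` just creates a Restart OpsRequest scoped to the named components. A hand-written equivalent might look like this sketch (field names per the `operations.kubeblocks.io` OpsRequest schema as I understand it; verify against the installed CRD, and the metadata name is hypothetical):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: master-restart           # hypothetical name; kbcli generates one
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: Restart
  restart:
  - componentName: master        # omit data so only master pods roll
```

The 0/3 → 3/3 progress above reflects the three master pods being rolled one at a time; pods are replaced newest-to-oldest, which matches the CREATED-TIME ordering in the instance list.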
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart elastics-yurtvt --auto-approve --force=true --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-tt4xq created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-yurtvt-restart-tt4xq -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-tt4xq ns-bpqhv Restart elastics-yurtvt master,data Running 0/6 May 28,2025 12:52 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating (repeated while the full restart ran)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:56 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:54 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:52 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 12:57 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 12:54 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 500m / 500m 2Gi / 2Gi data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:52 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-restart-tt4xq ns-bpqhv Restart elastics-yurtvt master,data Succeed 6/6 May 28,2025 12:52 UTC+0800
check ops status done
ops_status:elastics-yurtvt-restart-tt4xq ns-bpqhv Restart elastics-yurtvt master,data Succeed 6/6 May 28,2025 12:52 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-restart-tt4xq --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-restart-tt4xq patched
`kbcli cluster delete-ops --name elastics-yurtvt-restart-tt4xq --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-restart-tt4xq deleted
No resources found in ns-bpqhv namespace.
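The data-count probe used after each operation posts a small search body with `track_total_hits` so `_search` returns an exact `hits.total` instead of Elasticsearch's default lower-bound of 10,000. Building and sanity-checking that body locally (assumes `python3`; no cluster involved):

```shell
# Search body: return zero documents, but force an exact total-hit count.
BODY='{"size": 0, "track_total_hits": true}'

# Validate the JSON and pull out the size field.
SIZE=$(echo "$BODY" | python3 -c 'import json,sys; print(json.load(sys.stdin)["size"])')
echo "size=$SIZE"   # prints "size=0"

# Against a live cluster this body would be POSTed to
#   <es-host>:9200/executions_loop_index/_search
```

Because `size` is 0, the query is cheap: no documents are fetched, only the hit count is computed, which is all the data-integrity check needs.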
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale elastics-yurtvt --auto-approve --force=true --components master --cpu 600m --memory 2.1Gi --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-verticalscaling-ptpn8 created successfully, you can view the progress:
	kbcli cluster describe-ops elastics-yurtvt-verticalscaling-ptpn8 -n ns-bpqhv
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-verticalscaling-ptpn8 ns-bpqhv VerticalScaling elastics-yurtvt master Running 0/3 May 28,2025 12:59 UTC+0800
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Updating May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
cluster_status:Updating (repeated while the vertical scaling ran)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:56 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:54 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:52 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 13:03 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 13:01 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:59 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
check ops status
`kbcli cluster list-ops elastics-yurtvt --status all --namespace ns-bpqhv `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
elastics-yurtvt-verticalscaling-ptpn8 ns-bpqhv VerticalScaling elastics-yurtvt master Succeed 3/3 May 28,2025 12:59 UTC+0800
check ops status done
ops_status:elastics-yurtvt-verticalscaling-ptpn8 ns-bpqhv VerticalScaling elastics-yurtvt master Succeed 3/3 May 28,2025 12:59 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations elastics-yurtvt-verticalscaling-ptpn8 --namespace ns-bpqhv `
opsrequest.operations.kubeblocks.io/elastics-yurtvt-verticalscaling-ptpn8 patched
`kbcli cluster delete-ops --name elastics-yurtvt-verticalscaling-ptpn8 --force --auto-approve --namespace ns-bpqhv `
OpsRequest elastics-yurtvt-verticalscaling-ptpn8 deleted
No resources found in ns-bpqhv namespace.
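For completeness, the VerticalScaling OpsRequest that `kbcli cluster vscale` generates for the master component could be written by hand roughly as below (same caveats as before: field names follow my reading of the `operations.kubeblocks.io` schema and should be verified against the installed CRD; the metadata name is hypothetical):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: master-vscale            # hypothetical name; kbcli generates one
  namespace: ns-bpqhv
spec:
  clusterName: elastics-yurtvt
  type: VerticalScaling
  verticalScaling:
  - componentName: master
    requests:
      cpu: 600m
      memory: 2.1Gi
    limits:
      cpu: 600m
      memory: 2.1Gi
```

Because requests and limits are set equal, the pods keep the Guaranteed QoS class before and after the resize.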
check db_client batch data count
`echo "curl -X GET 'elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/executions_loop_index/_search' -H 'Content-Type: application/json' -d '{\"size\": 0,\"track_total_hits\": true}' " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check db_client batch data Success
cluster update terminationPolicy WipeOut
`kbcli cluster update elastics-yurtvt --termination-policy=WipeOut --namespace ns-bpqhv `
cluster.apps.kubeblocks.io/elastics-yurtvt updated (no change)
check cluster status
`kbcli cluster list elastics-yurtvt --show-labels --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
elastics-yurtvt ns-bpqhv WipeOut Running May 28,2025 11:34 UTC+0800 app.kubernetes.io/instance=elastics-yurtvt
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances elastics-yurtvt --namespace ns-bpqhv `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
elastics-yurtvt-data-0 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-9-211.us-west-2.compute.internal/172.31.9.211 May 28,2025 12:56 UTC+0800
elastics-yurtvt-data-1 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-135.us-west-2.compute.internal/172.31.2.135 May 28,2025 12:54 UTC+0800
elastics-yurtvt-data-2 ns-bpqhv elastics-yurtvt data Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:22Gi ip-172-31-2-224.us-west-2.compute.internal/172.31.2.224 May 28,2025 12:52 UTC+0800
elastics-yurtvt-master-0 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-11-41.us-west-2.compute.internal/172.31.11.41 May 28,2025 13:03 UTC+0800
elastics-yurtvt-master-1 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-5-104.us-west-2.compute.internal/172.31.5.104 May 28,2025 13:01 UTC+0800
elastics-yurtvt-master-2 ns-bpqhv elastics-yurtvt master Running us-west-2a 600m / 600m 2254857830400m / 2254857830400m data:24Gi ip-172-31-11-213.us-west-2.compute.internal/172.31.11.213 May 28,2025 12:59 UTC+0800
check pod status done
No resources found in ns-bpqhv namespace.
check cluster connect
`echo "curl http://elastics-yurtvt-master-http.ns-bpqhv.svc.cluster.local:9200/_cluster/health\?pretty " | kubectl exec -it elastics-yurtvt-master-0 --namespace ns-bpqhv -- sh`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs elastics-yurtvt --namespace ns-bpqhv `
No log files found.
Error from server (NotFound): pods "elastics-yurtvt-master-0" not found
cluster logs
`kbcli cluster logs elastics-yurtvt --tail 30 --namespace ns-bpqhv `
Defaulted container "elasticsearch" out of: elasticsearch, exporter, kbagent, prepare-plugins (init), install-plugins (init), init-kbagent (init), kbagent-worker (init)
"at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.10.1.jar:7.10.1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.1.jar:7.10.1]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.10.1.jar:7.10.1]",
"at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.10.1.jar:7.10.1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
"at java.lang.Thread.run(Thread.java:832) [?:?]"] }
{"type": "server", "timestamp":
"2025-05-28T04:59:45,616Z", "level": "INFO", "component": "o.e.c.c.JoinHelper", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "failed to join ***elastics-yurtvt-data-0***dPdoiq5mQz2d6Vm-2FMrlA***esuSnIYZS226I_lgA3PCsA***172.31.14.201***172.31.14.201:9300***cdhimrstw***k8s_node_name=ip-172-31-9-211.us-west-2.compute.internal, xpack.installed=true, transform.node=true*** with JoinRequest***sourceNode=***elastics-yurtvt-data-0***dPdoiq5mQz2d6Vm-2FMrlA***esuSnIYZS226I_lgA3PCsA***172.31.14.201***172.31.14.201:9300***cdhimrstw***k8s_node_name=ip-172-31-9-211.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, minimumTerm=50, optionalJoin=Optional[Join***term=51, lastAcceptedTerm=47, lastAcceptedVersion=304, sourceNode=***elastics-yurtvt-data-0***dPdoiq5mQz2d6Vm-2FMrlA***esuSnIYZS226I_lgA3PCsA***172.31.14.201***172.31.14.201:9300***cdhimrstw***k8s_node_name=ip-172-31-9-211.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, targetNode=***elastics-yurtvt-data-0***dPdoiq5mQz2d6Vm-2FMrlA***esuSnIYZS226I_lgA3PCsA***172.31.14.201***172.31.14.201:9300***cdhimrstw***k8s_node_name=ip-172-31-9-211.us-west-2.compute.internal, xpack.installed=true, transform.node=true***]***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" , "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [elastics-yurtvt-data-0][172.31.14.201:9300][internal:cluster/coordination/join]", "Caused by: org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: node is no longer master for term 51 while handling publication", "at org.elasticsearch.cluster.coordination.Coordinator.publish(Coordinator.java:1083) ~[elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.MasterService.publish(MasterService.java:268) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:250) 
[elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.10.1.jar:7.10.1]", "at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.10.1.jar:7.10.1]", "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]", "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]", "at java.lang.Thread.run(Thread.java:832) [?:?]"] *** ***"type": "server", "timestamp": "2025-05-28T04:59:45,682Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "master node changed ***previous [], current [***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***]***, term: 52, version: 305, reason: ApplyCommitRequest***term=52, version=305, 
sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** ***"type": "server", "timestamp": "2025-05-28T04:59:46,294Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "removed ***elastics-yurtvt-master-2***2YvLi8YyQgua_VwKrrudTg***5m5E-3gkQbWCoGOLxEbZIw***172.31.7.203***172.31.7.203:9300***cdhimrstw***k8s_node_name=ip-172-31-11-213.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 306, reason: ApplyCommitRequest***term=52, version=306, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** ***"type": "server", "timestamp": "2025-05-28T05:01:05,522Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "added ***elastics-yurtvt-master-2***2YvLi8YyQgua_VwKrrudTg***6xSPsNdtSFqiUfgRfrMwQQ***172.31.0.22***172.31.0.22:9300***cdhimrstw***k8s_node_name=ip-172-31-11-213.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 309, reason: ApplyCommitRequest***term=52, version=309, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** 
***"type": "server", "timestamp": "2025-05-28T05:01:44,066Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "removed ***elastics-yurtvt-master-1***EVnMaDy2Toab2it1zoSNSw***tza0zknlRaKzC4B0dH_c0w***172.31.9.247***172.31.9.247:9300***cdhimrstw***k8s_node_name=ip-172-31-5-104.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 310, reason: ApplyCommitRequest***term=52, version=310, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** ***"type": "server", "timestamp": "2025-05-28T05:02:58,120Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "added ***elastics-yurtvt-master-1***EVnMaDy2Toab2it1zoSNSw***RwWJdLD7R1GFZrcqQpBwBw***172.31.5.142***172.31.5.142:9300***cdhimrstw***k8s_node_name=ip-172-31-5-104.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 315, reason: ApplyCommitRequest***term=52, version=315, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** ***"type": "server", "timestamp": "2025-05-28T05:03:34,581Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "removed 
***elastics-yurtvt-master-0***U9g0Z9VBSYmezXd1qnbTQA***p8H_r6XCQwmQ85JZAuP9pw***172.31.12.121***172.31.12.121:9300***cdhimrstw***k8s_node_name=ip-172-31-11-41.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 316, reason: ApplyCommitRequest***term=52, version=316, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** ***"type": "server", "timestamp": "2025-05-28T05:04:42,374Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "ns-bpqhv", "node.name": "elastics-yurtvt-data-0", "message": "added ***elastics-yurtvt-master-0***U9g0Z9VBSYmezXd1qnbTQA***qW7ntoGuQoqsqSoImu7xEA***172.31.0.141***172.31.0.141:9300***cdhimrstw***k8s_node_name=ip-172-31-11-41.us-west-2.compute.internal, xpack.installed=true, transform.node=true***, term: 52, version: 321, reason: ApplyCommitRequest***term=52, version=321, sourceNode=***elastics-yurtvt-data-2***ECJI-AvTTSO9EOYnub-J-Q***FwP4IcK_SgO476tE5keAGA***172.31.1.87***172.31.1.87:9300***cdhimrstw***k8s_node_name=ip-172-31-2-224.us-west-2.compute.internal, xpack.installed=true, transform.node=true***", "cluster.uuid": "MISF6YoaSH6zn5SKG4dW-Q", "node.id": "dPdoiq5mQz2d6Vm-2FMrlA" *** delete cluster elastics-yurtvt `kbcli cluster delete elastics-yurtvt --auto-approve --namespace ns-bpqhv ` Cluster elastics-yurtvt deleted pod_info:elastics-yurtvt-data-0 3/3 Terminating 0 9m15s elastics-yurtvt-data-1 3/3 Terminating 0 10m elastics-yurtvt-data-2 3/3 Terminating 0 12m elastics-yurtvt-master-0 3/3 Terminating 0 2m1s elastics-yurtvt-master-1 3/3 Terminating 0 3m51s elastics-yurtvt-master-2 3/3 Terminating 0 5m51s pod_info:elastics-yurtvt-data-0 2/3 Terminating 0 9m36s elastics-yurtvt-data-1 2/3 Terminating 0 11m 
elastics-yurtvt-data-2 2/3 Terminating 0 13m
elastics-yurtvt-master-0 2/3 Terminating 0 2m22s
elastics-yurtvt-master-1 2/3 Terminating 0 4m12s
elastics-yurtvt-master-2 2/3 Terminating 0 6m12s
No resources found in ns-bpqhv namespace.
delete cluster pod done
No resources found in ns-bpqhv namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-bpqhv namespace.
delete cluster done
No resources found in ns-bpqhv namespace.
No resources found in ns-bpqhv namespace.
No resources found in ns-bpqhv namespace.
ElasticSearch Test Suite All Done!
--------------------------------------ElasticSearch (Topology = multi-node Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[ComponentDefinition=elasticsearch-7-1.0.0-alpha.0;ComponentVersion=elasticsearch;ServiceVersion=7.7.1;]|[Description=Create a cluster with the specified component definition elasticsearch-7-1.0.0-alpha.0 and component version elasticsearch and service version 7.7.1]
[PASSED]|[Connect]|[ComponentName=master]|[Description=Connect to the cluster]
[PASSED]|[AddData]|[Values=vaehx]|[Description=Add data to the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=master]|[Description=VolumeExpansion the cluster specify component master]
[PASSED]|[Upgrade]|[ComponentName=master,data;ComponentVersionFrom=7.7.1;ComponentVersionTo=7.8.1]|[Description=Upgrade the cluster specify component master,data service version from 7.7.1 to 7.8.1]
[PASSED]|[Upgrade]|[ComponentName=master,data;ComponentVersionFrom=7.8.1;ComponentVersionTo=7.10.1]|[Description=Upgrade the cluster specify component master,data service version from 7.8.1 to 7.10.1]
[PASSED]|[HorizontalScaling Out]|[ComponentName=data]|[Description=HorizontalScaling Out the cluster specify component data]
[PASSED]|[HorizontalScaling In]|[ComponentName=data]|[Description=HorizontalScaling In the cluster specify component data]
[PASSED]|[Restart]|[ComponentName=data]|[Description=Restart the cluster specify component data]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=master]|[Description=Hscale the cluster instances offline specify component master]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=master]|[Description=Hscale the cluster instances online specify component master]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Failover]|[HA=Connection Stress;ComponentName=master]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes, thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high connection load.]
[PASSED]|[VolumeExpansion]|[ComponentName=data]|[Description=VolumeExpansion the cluster specify component data]
[PASSED]|[HorizontalScaling Out]|[ComponentName=master]|[Description=HorizontalScaling Out the cluster specify component master]
[PASSED]|[HorizontalScaling In]|[ComponentName=master]|[Description=HorizontalScaling In the cluster specify component master]
[PASSED]|[VerticalScaling]|[ComponentName=data]|[Description=VerticalScaling the cluster specify component data]
[PASSED]|[Restart]|[ComponentName=master]|[Description=Restart the cluster specify component master]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=master]|[Description=VerticalScaling the cluster specify component master]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
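Each result line in the report above follows a fixed `[STATUS]|[Case]|[Args]|[Description=...]` shape, so a tally for a run can be extracted with standard tools. A minimal sketch, assuming a copy of the report in a file; the `summarize_report` helper name and the `/tmp/report.txt` path are illustrative, not part of kbcli:

```shell
# Count PASSED/FAILED lines in a test report of the format shown above.
# summarize_report is a hypothetical helper, not a kbcli command.
summarize_report() {
  # $1 = path to report file; prints "passed=N failed=M"
  local passed failed
  passed=$(grep -c '^\[PASSED\]' "$1" || true)   # grep -c prints 0 on no match
  failed=$(grep -c '^\[FAILED\]' "$1" || true)
  echo "passed=${passed} failed=${failed}"
}

# Example with three result lines copied from the suite output:
cat > /tmp/report.txt <<'EOF'
[PASSED]|[Create]|[ComponentDefinition=elasticsearch-7-1.0.0-alpha.0;ComponentVersion=elasticsearch;ServiceVersion=7.7.1;]|[Description=Create a cluster]
[PASSED]|[Connect]|[ComponentName=master]|[Description=Connect to the cluster]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
EOF
summarize_report /tmp/report.txt   # → passed=3 failed=0
```

The `|| true` keeps the function usable under `set -e`, since `grep -c` exits non-zero when a pattern matches no lines even though it still prints `0`.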