bash test/kbcli/test_kbcli_0.9.sh --type 28 --version 0.9.5 --generate-output true --chaos-mesh true --drain-node true --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-gqdda `
`kubectl create namespace ns-gqdda`
namespace/ns-gqdda created
create namespace ns-gqdda done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.5-beta.8`
Your system is linux_amd64
Installing kbcli ...
Downloading ... (curl progress output elided)
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:

	https://kubeblocks.io

download kbcli v0.9.5-beta.8 done
Kubernetes Env: v1.32.6
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
kubeblocks version is:0.9.5
skip upgrade kubeblocks
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check cluster definition
check component definition
check component definition
set component name:etcd
LIMIT_CPU:0.1 LIMIT_MEMORY:0.5
storage size: 1
No resources found in ns-gqdda namespace.
create 1 replica WipeOut etcd cluster
check component definition
set component definition by component version
no component definitions found
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: etcdm-udrdxe
  namespace: ns-gqdda
spec:
  terminationPolicy: WipeOut
  componentSpecs:
    - name: etcd
      componentDef: etcd
      serviceVersion: 3.6.1
      replicas: 1
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
  services:
    - name: client
      serviceName: client
      spec:
        type: NodePort
        ports:
          - port: 2379
            targetPort: 2379
      componentSelector: etcd
      roleSelector: leader
`kubectl apply -f test_create_etcdm-udrdxe.yaml`
cluster.apps.kubeblocks.io/etcdm-udrdxe created
apply test_create_etcdm-udrdxe.yaml Success
`rm -rf test_create_etcdm-udrdxe.yaml`
check cluster status
`kbcli cluster list etcdm-udrdxe --show-labels --namespace ns-gqdda `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
etcdm-udrdxe   ns-gqdda                                   WipeOut                       Sep 01,2025 11:19 UTC+0800
cluster_status:
cluster_status:Creating (x12)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances etcdm-udrdxe --namespace ns-gqdda `
NAME                  NAMESPACE   CLUSTER        COMPONENT   STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
etcdm-udrdxe-etcd-0   ns-gqdda    etcdm-udrdxe   etcd        Running   leader                0    100m / 100m          512Mi / 512Mi           data:1Gi   aks-cicdamdpool-15164480-vmss000000/10.224.0.5   Sep 01,2025 11:19 UTC+0800
check pod status done
No resources found in ns-gqdda namespace.
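The repeated cluster_status polling above follows a simple retry-until-match pattern. A minimal sketch of that pattern (an assumed helper, not the harness's actual function; the jsonpath read at the bottom is the standard way to fetch a KubeBlocks cluster phase):

```shell
# Hypothetical helper: re-run a command until its output equals the expected
# value, polling once per second, up to a retry limit.
poll_until() {
  local want="$1" tries="$2"; shift 2
  local out i
  for ((i = 0; i < tries; i++)); do
    out="$("$@")"
    if [ "$out" = "$want" ]; then
      echo "$out"
      return 0
    fi
    sleep 1
  done
  return 1
}

# In this log the wrapped command would be something like:
#   poll_until Running 60 kubectl get cluster etcdm-udrdxe -n ns-gqdda \
#     -o jsonpath='{.status.phase}'
```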
check cluster connect
`echo 'etcdctl --endpoints=http://etcdm-udrdxe-client.ns-gqdda.svc.cluster.local:2379 endpoint health' | kubectl exec -it etcdm-udrdxe-etcd-0 --namespace ns-gqdda -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=etcdm-udrdxe`
No resources found in ns-gqdda namespace.
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:2379;DB_DATABASE:
There is no password in Type: 15.
check component definition
set component name:kafka-combine
LIMIT_CPU:0.5 LIMIT_MEMORY:1
storage size: 5
No resources found in ns-gqdda namespace.
create 1 replica WipeOut kafka cluster
check cluster definition
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: kafkam-udrdxe
  namespace: ns-gqdda
  annotations:
    "kubeblocks.io/extra-env": '{"KB_KAFKA_ENABLE_SASL":"false","KB_KAFKA_BROKER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_CONTROLLER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_PUBLIC_ACCESS":"false"}'
spec:
  clusterDefinitionRef: kafka
  topology: combined
  terminationPolicy: WipeOut
  componentSpecs:
    - name: kafka-combine
      tls: false
      monitor: true
      replicas: 1
      serviceVersion: 3.3.2
      services:
        - name: advertised-listener
          serviceType: ClusterIP
          podService: true
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 500m
          memory: 1Gi
      env:
        - name: KB_BROKER_DIRECT_POD_ACCESS
          value: "false"
        - name: KB_KAFKA_ENABLE_SASL_SCRAM
          value: "false"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
        - name: metadata
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
`kubectl apply -f test_create_kafkam-udrdxe.yaml`
cluster.apps.kubeblocks.io/kafkam-udrdxe created
apply test_create_kafkam-udrdxe.yaml Success
`rm -rf test_create_kafkam-udrdxe.yaml`
check cluster status
`kbcli cluster list kafkam-udrdxe --show-labels --namespace ns-gqdda `
NAME            NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
kafkam-udrdxe   ns-gqdda    kafka                          WipeOut                       Sep 01,2025 11:25 UTC+0800   clusterdefinition.kubeblocks.io/name=kafka,clusterversion.kubeblocks.io/name=
cluster_status:
cluster_status:Creating (x7)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances kafkam-udrdxe --namespace ns-gqdda `
NAME                            NAMESPACE   CLUSTER         COMPONENT       STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE        NODE                                             CREATED-TIME
kafkam-udrdxe-kafka-combine-0   ns-gqdda    kafkam-udrdxe   kafka-combine   Running                       0    500m / 500m          1Gi / 1Gi               data:5Gi       aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:25 UTC+0800
                                                                                                                                                           metadata:5Gi
check pod status done
connect
unsupported engine type: kafka
kafka
`kubectl get secrets -l app.kubernetes.io/instance=kafkam-udrdxe`
No resources found in ns-gqdda namespace.
Not found cluster secret
DB_USERNAME:;DB_PASSWORD:;DB_PORT:9092;DB_DATABASE:
There is no password in Type: 7.
check cluster definition
set component name:milvus
set component version
set component version:milvus
set service versions:2.3.2,2.5.13
set service versions sorted:2.3.2,2.5.13
no cluster version found
unsupported component definition
REPORT_COUNT 0:0
set replicas first:1,2.3.2|1,2.5.13
set replicas third:1,2.5.13
set replicas fourth:1,2.5.13
set minimum cmpv service version
set minimum cmpv service version replicas:1,2.5.13
REPORT_COUNT:1
CLUSTER_TOPOLOGY:
set cluster topology: standalone
LIMIT_CPU:0.5 LIMIT_MEMORY:0.5
storage size: 5
No resources found in ns-gqdda namespace.
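The "set service versions sorted" step above implies a version-aware ordering of the advertised service versions (2.3.2, 2.5.13). With GNU coreutils that ordering is just `sort -V`; a sketch, unrelated to the script's actual implementation:

```shell
# Version-aware sort: plain string sort would agree for 2.3.2 vs 2.5.13, but
# version sort also gets cases like 2.10.0 vs 2.9.0 right.
versions='2.3.2,2.5.13'
printf '%s\n' ${versions//,/ } | sort -V             # ascending: 2.3.2, 2.5.13
printf '%s\n' ${versions//,/ } | sort -rV | head -n1  # highest first: 2.5.13
```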
termination_policy:WipeOut
create 1 replica WipeOut milvus cluster
check cluster definition
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: milvus-standalone-0.9.1 by component version:milvus
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: milvus-udrdxe
  namespace: ns-gqdda
spec:
  clusterDefinitionRef: milvus
  topology: standalone
  terminationPolicy: WipeOut
  services:
    - name: proxy
      serviceName: proxy
      componentSelector: milvus
      spec:
        type: ClusterIP
        ports:
          - name: milvus
            port: 19530
            protocol: TCP
            targetPort: milvus
  componentSpecs:
    - name: milvus
      disableExporter: true
      replicas: 1
      resources:
        requests:
          cpu: 500m
          memory: 0.5Gi
        limits:
          cpu: 500m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
    - name: etcd
      replicas: 1
      resources:
        limits:
          cpu: 500m
          memory: 0.5Gi
        requests:
          cpu: 500m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
    - name: minio
      replicas: 1
      resources:
        limits:
          cpu: 500m
          memory: 0.5Gi
        requests:
          cpu: 500m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
`kubectl apply -f test_create_milvus-udrdxe.yaml`
cluster.apps.kubeblocks.io/milvus-udrdxe created
apply test_create_milvus-udrdxe.yaml Success
`rm -rf test_create_milvus-udrdxe.yaml`
check cluster status
`kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda `
NAME            NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS            CREATED-TIME                 LABELS
milvus-udrdxe   ns-gqdda    milvus                         WipeOut              ConditionsError   Sep 01,2025 11:29 UTC+0800   clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name=
cluster_status:
cluster_status:Creating (x3)
cluster_status:Updating (x5)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda `
NAME                     NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
milvus-udrdxe-etcd-0     ns-gqdda    milvus-udrdxe   etcd        Running   leader                0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
milvus-udrdxe-milvus-0   ns-gqdda    milvus-udrdxe   milvus      Running                         0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:30 UTC+0800
milvus-udrdxe-minio-0    ns-gqdda    milvus-udrdxe   minio       Running                         0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
check pod status done
connect
unsupported engine type: milvus
milvus-standalone-0.9.1
`kubectl get secrets -l app.kubernetes.io/instance=milvus-udrdxe`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME:admin;DB_PASSWORD:ydoHwojKB0W562zw;DB_PORT:19530;DB_DATABASE:
check pod milvus-udrdxe-milvus-0 container_name milvus exist password ydoHwojKB0W562zw
No container logs contain secret password.
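The jsonpath reads above return the raw `.data` fields of the Secret, which are base64-encoded; the harness evidently decodes them into the DB_USERNAME/DB_PASSWORD values it prints. A sketch of that step (the helper name is made up; the secret name and namespace are from the log):

```shell
# Hypothetical helper: read one field of a Secret and base64-decode it.
secret_field() {
  kubectl get secret "$1" -n ns-gqdda -o jsonpath="{.data.$2}" | base64 -d
}
# e.g. secret_field milvus-udrdxe-minio-account-admin password

# The decode step itself, runnable locally (encoded form of the logged password):
printf '%s' 'eWRvSHdvaktCMFc1NjJ6dw==' | base64 -d   # -> ydoHwojKB0W562zw
```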
describe cluster
`kbcli cluster describe milvus-udrdxe --namespace ns-gqdda `
Name: milvus-udrdxe	 Created Time: Sep 01,2025 11:29 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION   STATUS    TERMINATION-POLICY
ns-gqdda    milvus                         Running   WipeOut

Endpoints:
COMPONENT   MODE   INTERNAL   EXTERNAL

Topology:
COMPONENT   INSTANCE                 ROLE     STATUS    AZ   NODE                                             CREATED-TIME
etcd        milvus-udrdxe-etcd-0     leader   Running   0    aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
milvus      milvus-udrdxe-milvus-0            Running   0    aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:30 UTC+0800
minio       milvus-udrdxe-minio-0             Running   0    aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
etcd        false       500m / 500m          512Mi / 512Mi           data:5Gi       default
minio       false       500m / 500m          512Mi / 512Mi           data:5Gi       default
milvus      false       500m / 500m          512Mi / 512Mi           data:5Gi       default

Images:
COMPONENT   TYPE   IMAGE
etcd               docker.io/apecloud/etcd:v3.6.1
minio              docker.io/apecloud/minio:RELEASE.2022-03-17T06-34-49Z
milvus             docker.io/apecloud/milvus:v2.5.13

Show cluster events: kbcli cluster list-events -n ns-gqdda milvus-udrdxe

`kbcli cluster label milvus-udrdxe app.kubernetes.io/instance- --namespace ns-gqdda `
label "app.kubernetes.io/instance" not found.
`kbcli cluster label milvus-udrdxe app.kubernetes.io/instance=milvus-udrdxe --namespace ns-gqdda `
`kbcli cluster label milvus-udrdxe --list --namespace ns-gqdda `
NAME            NAMESPACE   LABELS
milvus-udrdxe   ns-gqdda    app.kubernetes.io/instance=milvus-udrdxe clusterdefinition.kubeblocks.io/name=milvus clusterversion.kubeblocks.io/name=
label cluster app.kubernetes.io/instance=milvus-udrdxe Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=milvus-udrdxe --namespace ns-gqdda `
`kbcli cluster label milvus-udrdxe --list --namespace ns-gqdda `
NAME            NAMESPACE   LABELS
milvus-udrdxe   ns-gqdda    app.kubernetes.io/instance=milvus-udrdxe case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=milvus clusterversion.kubeblocks.io/name=
label cluster case.name=kbcli.test1 Success
`kbcli cluster label milvus-udrdxe case.name=kbcli.test2 --overwrite --namespace ns-gqdda `
`kbcli cluster label milvus-udrdxe --list --namespace ns-gqdda `
NAME            NAMESPACE   LABELS
milvus-udrdxe   ns-gqdda    app.kubernetes.io/instance=milvus-udrdxe case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=milvus clusterversion.kubeblocks.io/name=
label cluster case.name=kbcli.test2 Success
`kbcli cluster label milvus-udrdxe case.name- --namespace ns-gqdda `
`kbcli cluster label milvus-udrdxe --list --namespace ns-gqdda `
NAME            NAMESPACE   LABELS
milvus-udrdxe   ns-gqdda    app.kubernetes.io/instance=milvus-udrdxe clusterdefinition.kubeblocks.io/name=milvus clusterversion.kubeblocks.io/name=
delete cluster label case.name Success
cluster connect
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-milvus-udrdxe" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-milvus-udrdxe --namespace ns-gqdda `
Error from server (NotFound): pods "test-db-client-executionloop-milvus-udrdxe" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-milvus-udrdxe" not found
`kubectl get secrets -l app.kubernetes.io/instance=milvus-udrdxe`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.username}"`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.password}"`
`kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.port}"`
DB_USERNAME:admin;DB_PASSWORD:ydoHwojKB0W562zw;DB_PORT:19530;DB_DATABASE:
No resources found in ns-gqdda namespace.
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-milvus-udrdxe
  namespace: ns-gqdda
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local"
        - "--user"
        - "admin"
        - "--password"
        - "ydoHwojKB0W562zw"
        - "--port"
        - "19530"
        - "--dbtype"
        - "milvus"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-milvus-udrdxe.yaml`
pod/test-db-client-executionloop-milvus-udrdxe created
apply test-db-client-executionloop-milvus-udrdxe.yaml Success
`rm -rf test-db-client-executionloop-milvus-udrdxe.yaml`
check pod status
pod_status:NAME                                         READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-milvus-udrdxe   1/1     Running   0          6s
(status polled every ~5s; still Running at 62s)
check pod test-db-client-executionloop-milvus-udrdxe status done
pod_status:NAME                                         READY   STATUS      RESTARTS   AGE
test-db-client-executionloop-milvus-udrdxe   0/1     Completed   0          68s
check cluster status
`kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda `
NAME            NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
milvus-udrdxe   ns-gqdda    milvus                         WipeOut              Running   Sep 01,2025 11:29 UTC+0800   app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda `
NAME                     NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
milvus-udrdxe-etcd-0     ns-gqdda    milvus-udrdxe   etcd        Running   leader                0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
milvus-udrdxe-milvus-0   ns-gqdda    milvus-udrdxe   milvus      Running                         0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:30 UTC+0800
milvus-udrdxe-minio-0    ns-gqdda    milvus-udrdxe   minio       Running                         0    500m / 500m          512Mi / 512Mi           data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
check pod status done
connect
unsupported engine type: milvus
milvus-standalone-0.9.1
--host milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local --user admin --password ydoHwojKB0W562zw --port 19530 --dbtype milvus --test executionloop --duration 60 --interval 1
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
Execution loop start:
Collection executions_loop_collection does not exist. Creating collection...
Collection executions_loop_collection created successfully.
Execution loop start: insert:executions_loop_collection:10::1:executions_loop_1
[  1s ] executions total: 239    successful: 239    failed: 0 disconnect: 0
[  2s ] executions total: 580    successful: 580    failed: 0 disconnect: 0
[  3s ] executions total: 953    successful: 953    failed: 0 disconnect: 0
( ... one progress line per second elided ... )
[ 59s ] executions total: 21939  successful: 21939  failed: 0 disconnect: 0
[ 60s ] executions total: 22314  successful: 22314  failed: 0 disconnect: 0
Test Result:
  Total Executions:      22314
  Successful Executions: 22314
  Failed Executions:     0
  Disconnection Counts:  0
Connection Information:
  Database Type: milvus
  Host: milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local
  Port: 19530
  Database:
  Table:
  User: admin
  Org:
  Access Mode: mysql
  Test Type: executionloop
  Query:
  Duration: 60 seconds
  Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 22314
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-milvus-udrdxe --namespace ns-gqdda `
pod/test-db-client-executionloop-milvus-udrdxe patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-milvus-udrdxe" force deleted
check component etcd exists
`kubectl get components -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=etcd --namespace ns-gqdda | (grep "etcd" || true )`
cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale milvus-udrdxe --auto-approve --force=true --components etcd --cpu 600m --memory 0.6Gi --namespace ns-gqdda `
OpsRequest milvus-udrdxe-verticalscaling-2w2zw created successfully, you can view the progress:
	kbcli cluster describe-ops milvus-udrdxe-verticalscaling-2w2zw -n ns-gqdda
check ops status
`kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda `
NAME                                  NAMESPACE   TYPE              CLUSTER         COMPONENT   STATUS    PROGRESS   CREATED-TIME
milvus-udrdxe-verticalscaling-2w2zw   ns-gqdda    VerticalScaling   milvus-udrdxe   etcd        Running   -/-        Sep 01,2025 11:34 UTC+0800
check cluster status
`kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda `
NAME            NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
milvus-udrdxe   ns-gqdda    milvus                         WipeOut              Updating   Sep 01,2025 11:29 UTC+0800   app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name=
cluster_status:Updating (x4)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda `
NAME                     NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
milvus-udrdxe-etcd-0     ns-gqdda    milvus-udrdxe   etcd        Running   leader                0    600m / 600m          644245094400m / 644245094400m   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:34 UTC+0800
milvus-udrdxe-milvus-0   ns-gqdda    milvus-udrdxe   milvus      Running                         0    500m / 500m          512Mi / 512Mi                   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:30 UTC+0800
milvus-udrdxe-minio-0    ns-gqdda    milvus-udrdxe   minio       Running                         0    500m / 500m          512Mi / 512Mi                   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
check pod status done
connect
unsupported engine type: milvus
milvus-standalone-0.9.1
check ops status
`kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda `
NAME                                  NAMESPACE   TYPE              CLUSTER         COMPONENT   STATUS    PROGRESS   CREATED-TIME
milvus-udrdxe-verticalscaling-2w2zw   ns-gqdda    VerticalScaling   milvus-udrdxe   etcd        Succeed   1/1        Sep 01,2025 11:34 UTC+0800
check ops status done
ops_status:milvus-udrdxe-verticalscaling-2w2zw ns-gqdda VerticalScaling milvus-udrdxe etcd Succeed 1/1 Sep 01,2025 11:34 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-verticalscaling-2w2zw --namespace ns-gqdda `
opsrequest.apps.kubeblocks.io/milvus-udrdxe-verticalscaling-2w2zw patched
`kbcli cluster delete-ops --name milvus-udrdxe-verticalscaling-2w2zw --force --auto-approve --namespace ns-gqdda `
OpsRequest milvus-udrdxe-verticalscaling-2w2zw deleted
No resources found in ns-gqdda namespace.
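After the vscale, the etcd pod reports its memory as 644245094400m rather than 0.6Gi. That is expected: 0.6Gi is a fractional binary quantity, and Kubernetes canonicalizes fractional byte amounts into whole milli-units ("m" = 1/1000 of a byte in a memory quantity). The arithmetic can be checked directly:

```shell
# 0.6Gi = 0.6 * 1024^3 bytes = 644245094.4 bytes, which is not a whole number
# of bytes, so the API server stores it as 644245094400 milli-bytes.
# Computed with integers only: 0.6 * 2^30 * 1000 = 6 * 2^30 * 100.
echo $((6 * 1024 * 1024 * 1024 * 100))   # -> 644245094400
```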
check db_client batch data count
`echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash`
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart milvus-udrdxe --auto-approve --force=true --components milvus --namespace ns-gqdda `
OpsRequest milvus-udrdxe-restart-864s7 created successfully, you can view the progress:
	kbcli cluster describe-ops milvus-udrdxe-restart-864s7 -n ns-gqdda
check ops status
`kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda `
NAME                          NAMESPACE   TYPE      CLUSTER         COMPONENT   STATUS     PROGRESS   CREATED-TIME
milvus-udrdxe-restart-864s7   ns-gqdda    Restart   milvus-udrdxe   milvus      Creating   -/-        Sep 01,2025 11:35 UTC+0800
check cluster status
`kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda `
NAME            NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
milvus-udrdxe   ns-gqdda    milvus                         WipeOut              Updating   Sep 01,2025 11:29 UTC+0800   app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name=
cluster_status:Updating (x26)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda `
NAME                     NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE     ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
milvus-udrdxe-etcd-0     ns-gqdda    milvus-udrdxe   etcd        Running   leader                0    600m / 600m          644245094400m / 644245094400m   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:34 UTC+0800
milvus-udrdxe-milvus-0   ns-gqdda    milvus-udrdxe   milvus      Running                         0    500m / 500m          512Mi / 512Mi                   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:35 UTC+0800
milvus-udrdxe-minio-0    ns-gqdda    milvus-udrdxe   minio       Running                         0    500m / 500m          512Mi / 512Mi                   data:5Gi   aks-cicdamdpool-15164480-vmss000005/10.224.0.6   Sep 01,2025 11:29 UTC+0800
check pod status done
connect
unsupported engine type: milvus
milvus-standalone-0.9.1
check ops status
`kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda `
NAME                          NAMESPACE   TYPE      CLUSTER         COMPONENT   STATUS    PROGRESS   CREATED-TIME
milvus-udrdxe-restart-864s7   ns-gqdda    Restart   milvus-udrdxe   milvus      Succeed   1/1        Sep 01,2025 11:35 UTC+0800
check ops status done
ops_status:milvus-udrdxe-restart-864s7 ns-gqdda Restart milvus-udrdxe milvus Succeed 1/1 Sep 01,2025 11:35 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-restart-864s7 --namespace ns-gqdda `
opsrequest.apps.kubeblocks.io/milvus-udrdxe-restart-864s7 patched
`kbcli cluster delete-ops --name milvus-udrdxe-restart-864s7 --force --auto-approve --namespace ns-gqdda `
OpsRequest milvus-udrdxe-restart-864s7 deleted
No resources found in ns-gqdda namespace.
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '***\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]***' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success check component minio exists `kubectl get components -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=minio --namespace ns-gqdda | (grep "minio" || true )` cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale milvus-udrdxe --auto-approve --force=true --components minio --cpu 600m --memory 0.6Gi --namespace ns-gqdda ` OpsRequest milvus-udrdxe-verticalscaling-bxgwt created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-verticalscaling-bxgwt -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-verticalscaling-bxgwt ns-gqdda VerticalScaling milvus-udrdxe minio Running -/- Sep 01,2025 11:37 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 
644245094400m / 644245094400m data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:34 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 500m / 500m 512Mi / 512Mi data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:35 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:37 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-verticalscaling-bxgwt ns-gqdda VerticalScaling milvus-udrdxe minio Succeed 1/1 Sep 01,2025 11:37 UTC+0800 check ops status done ops_status:milvus-udrdxe-verticalscaling-bxgwt ns-gqdda VerticalScaling milvus-udrdxe minio Succeed 1/1 Sep 01,2025 11:37 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-verticalscaling-bxgwt --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-verticalscaling-bxgwt patched `kbcli cluster delete-ops --name milvus-udrdxe-verticalscaling-bxgwt --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-verticalscaling-bxgwt deleted No resources found in ns-gqdda namespace.
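The odd-looking `644245094400m / 644245094400m` memory figures above are not an error: the request was `0.6Gi`, which is 0.6 × 1024³ = 644245094.4 bytes, a non-integer, so the Kubernetes API serializes it using the milli (`m`) suffix. A minimal sketch of the conversion, covering only the suffixes that appear in this log (not the full apimachinery quantity grammar):

```python
# Sketch of Kubernetes quantity parsing for the suffixes seen in this log
# (m, Mi, Gi). Real quantities support many more suffixes; this is only
# illustrative, not the apimachinery implementation.
from fractions import Fraction

SUFFIXES = {
    "m": Fraction(1, 1000),      # milli-units (1/1000 of a base unit)
    "Mi": Fraction(1024 ** 2),   # mebi
    "Gi": Fraction(1024 ** 3),   # gibi
}

def parse_quantity(q: str) -> Fraction:
    # Try longer suffixes first so "Gi"/"Mi" win over a bare trailing "m".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if q.endswith(suffix):
            return Fraction(q[: -len(suffix)]) * SUFFIXES[suffix]
    return Fraction(q)

# 0.6Gi is a fractional byte count, so the API server reports it in "m":
assert parse_quantity("0.6Gi") == parse_quantity("644245094400m")
```

This is why a `--memory 0.6Gi` vscale request shows up in `list-instances` as `644245094400m`.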
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success check component etcd exists `kubectl get components -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=etcd --namespace ns-gqdda | (grep "etcd" || true )` check component minio exists `kubectl get components -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=minio --namespace ns-gqdda | (grep "minio" || true )` `kubectl get pvc -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=etcd,minio,apps.kubeblocks.io/vct-name=data --namespace ns-gqdda ` No resources found in ns-gqdda namespace. milvus-udrdxe etcd,minio data pvc is empty cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in milvus-udrdxe namespace.
`kbcli cluster volume-expand milvus-udrdxe --auto-approve --force=true --components etcd,minio --volume-claim-templates data --storage 10Gi --namespace ns-gqdda ` OpsRequest milvus-udrdxe-volumeexpansion-8f8s7 created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-volumeexpansion-8f8s7 -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-volumeexpansion-8f8s7 ns-gqdda VolumeExpansion milvus-udrdxe etcd,minio Running 0/2 Sep 01,2025 11:37 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:34 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 500m / 500m 512Mi / 512Mi data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:35 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 
01,2025 11:37 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 No resources found in milvus-udrdxe namespace. check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-volumeexpansion-8f8s7 ns-gqdda VolumeExpansion milvus-udrdxe etcd,minio Succeed 2/2 Sep 01,2025 11:37 UTC+0800 check ops status done ops_status:milvus-udrdxe-volumeexpansion-8f8s7 ns-gqdda VolumeExpansion milvus-udrdxe etcd,minio Succeed 2/2 Sep 01,2025 11:37 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-volumeexpansion-8f8s7 --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-volumeexpansion-8f8s7 patched `kbcli cluster delete-ops --name milvus-udrdxe-volumeexpansion-8f8s7 --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-volumeexpansion-8f8s7 deleted No resources found in ns-gqdda namespace.
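The VolumeExpansion above succeeded because the requested 10Gi exceeds the current 5Gi; Kubernetes PVCs can only be grown, never shrunk, so a request below the current size would be rejected before any resize is attempted. A minimal sketch of that grow-only precheck (hypothetical helper, not the actual KubeBlocks admission logic; only the `Gi` sizes used in this run are handled):

```python
# Grow-only precheck for a PVC resize request. Only whole-number "Gi"
# sizes (the only kind used in this test run) are handled here.
def gi(size: str) -> int:
    if not size.endswith("Gi"):
        raise ValueError("only Gi sizes handled in this sketch")
    return int(size[:-2])

def expansion_allowed(current: str, requested: str) -> bool:
    # PVCs can only grow; equal or smaller requests are rejected.
    return gi(requested) > gi(current)

assert expansion_allowed("5Gi", "10Gi")      # this run's etcd/minio expansion
assert not expansion_allowed("10Gi", "8Gi")  # shrinking would be rejected
```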
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop milvus-udrdxe --auto-approve --force=true --namespace ns-gqdda ` OpsRequest milvus-udrdxe-stop-vl8bg created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-stop-vl8bg -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-stop-vl8bg ns-gqdda Stop milvus-udrdxe Creating -/- Sep 01,2025 11:41 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Stopping Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-stop-vl8bg ns-gqdda Stop milvus-udrdxe etcd,milvus,minio Succeed 3/3
Sep 01,2025 11:41 UTC+0800 check ops status done ops_status:milvus-udrdxe-stop-vl8bg ns-gqdda Stop milvus-udrdxe etcd,milvus,minio Succeed 3/3 Sep 01,2025 11:41 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-stop-vl8bg --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-stop-vl8bg patched `kbcli cluster delete-ops --name milvus-udrdxe-stop-vl8bg --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-stop-vl8bg deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start milvus-udrdxe --force=true --namespace ns-gqdda ` OpsRequest milvus-udrdxe-start-4vvpt created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-start-4vvpt -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-start-4vvpt ns-gqdda Start milvus-udrdxe Pending -/- Sep 01,2025 11:42 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Abnormal cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi
aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 500m / 500m 512Mi / 512Mi data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-start-4vvpt ns-gqdda Start milvus-udrdxe etcd,milvus,minio Succeed 3/3 Sep 01,2025 11:42 UTC+0800 check ops status done ops_status:milvus-udrdxe-start-4vvpt ns-gqdda Start milvus-udrdxe etcd,milvus,minio Succeed 3/3 Sep 01,2025 11:42 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-start-4vvpt --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-start-4vvpt patched `kbcli cluster delete-ops --name milvus-udrdxe-start-4vvpt --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-start-4vvpt deleted No resources found in ns-gqdda namespace.
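The repeated `cluster_status:Updating` / `cluster_status:Stopping` lines throughout this run come from a poll loop that re-reads the cluster status (via `kbcli cluster list`) until it reaches a terminal value. A generic sketch of that pattern, with the status source injected so it can be exercised without a cluster (the terminal-status set and intervals are assumptions, not the test script's actual values):

```python
import time

def wait_for_status(get_status, terminal=("Running", "Stopped"),
                    timeout=600.0, interval=5.0, sleep=time.sleep):
    """Poll get_status() until it returns a terminal status or timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"timed out; last status: {status}")
        sleep(interval)

# Simulated poll: a few Updating reads, then Running (sleep stubbed out).
readings = iter(["Updating", "Abnormal", "Updating", "Running"])
assert wait_for_status(lambda: next(readings), sleep=lambda _: None) == "Running"
```

Note the transient `Abnormal` reading in the start sequence above is absorbed the same way: only a terminal status ends the loop.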
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success test failover connectionstress check cluster status before cluster-failover-connectionstress check cluster status done cluster_status:Running check node drain check node drain success Error from server (NotFound): pods "test-db-client-connectionstress-milvus-udrdxe" not found `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-milvus-udrdxe --namespace ns-gqdda ` Error from server (NotFound): pods "test-db-client-connectionstress-milvus-udrdxe" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): pods "test-db-client-connectionstress-milvus-udrdxe" not found `kubectl get secrets -l app.kubernetes.io/instance=milvus-udrdxe` `kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.username}"` `kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.password}"` `kubectl get secrets milvus-udrdxe-minio-account-admin -o jsonpath="{.data.port}"` DB_USERNAME:admin;DB_PASSWORD:ydoHwojKB0W562zw;DB_PORT:19530;DB_DATABASE: No resources found in ns-gqdda namespace.
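The jsonpath reads above return the raw values from the secret's `.data` map, which Kubernetes stores base64-encoded; the script decodes them before assembling the `DB_USERNAME:...;DB_PASSWORD:...` line. A small sketch of that decode step (the sample values mirror the username and port shown in the log; the password is omitted here):

```python
# Kubernetes Secret .data values are base64-encoded strings; jsonpath
# output such as {.data.username} must be decoded before use.
import base64

# As a kubectl jsonpath read would return them (encoded):
secret_data = {
    "username": base64.b64encode(b"admin").decode(),
    "port": base64.b64encode(b"19530").decode(),
}

decoded = {k: base64.b64decode(v).decode() for k, v in secret_data.items()}
assert decoded == {"username": "admin", "port": "19530"}
```

(`kubectl get secret ... -o jsonpath` can also be piped through `base64 -d` in shell to the same effect.)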
apiVersion: v1 kind: Pod metadata: name: test-db-client-connectionstress-milvus-udrdxe namespace: ns-gqdda spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - "milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local" - "--user" - "admin" - "--password" - "ydoHwojKB0W562zw" - "--port" - "19530" - "--database" - "" - "--dbtype" - "milvus" - "--test" - "connectionstress" - "--connections" - "4096" - "--duration" - "60" restartPolicy: Never `kubectl apply -f test-db-client-connectionstress-milvus-udrdxe.yaml` pod/test-db-client-connectionstress-milvus-udrdxe created apply test-db-client-connectionstress-milvus-udrdxe.yaml Success `rm -rf test-db-client-connectionstress-milvus-udrdxe.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-milvus-udrdxe 1/1 Running 0 6s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-milvus-udrdxe 1/1 Running 0 10s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-milvus-udrdxe 1/1 Running 0 15s check pod test-db-client-connectionstress-milvus-udrdxe status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-milvus-udrdxe 0/1 Completed 0 20s check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 500m / 500m 512Mi / 512Mi data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:42 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 --host milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local --user admin --password ydoHwojKB0W562zw --port 19530 --database --dbtype milvus --test connectionstress --connections 4096 --duration 60 SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] Test execution failed: UNAVAILABLE: io exception io.grpc.StatusRuntimeException: UNAVAILABLE: io exception at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:268) at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:249) at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:167) at io.milvus.grpc.MilvusServiceGrpc$MilvusServiceBlockingStub.connect(MilvusServiceGrpc.java:5113) at io.milvus.v2.client.MilvusClientV2.connect(MilvusClientV2.java:152) at io.milvus.v2.client.MilvusClientV2.connect(MilvusClientV2.java:106) at io.milvus.v2.client.MilvusClientV2.<init>(MilvusClientV2.java:85) at
com.apecloud.dbtester.tester.MilvusTester.connect(MilvusTester.java:48) at com.apecloud.dbtester.tester.MilvusTester.connectionStress(MilvusTester.java:221) at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:33) at OneClient.executeTest(OneClient.java:108) at OneClient.main(OneClient.java:40) Caused by: io.grpc.netty.shaded.io.netty.channel.unix.Errors$NativeIoException: recvAddress(..) failed: Connection reset by peer `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-milvus-udrdxe --namespace ns-gqdda ` pod/test-db-client-connectionstress-milvus-udrdxe patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-connectionstress-milvus-udrdxe" force deleted check failover pod name failover pod name:milvus-udrdxe-milvus-0 failover connectionstress Success No resources found in ns-gqdda namespace.
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart milvus-udrdxe --auto-approve --force=true --namespace ns-gqdda ` OpsRequest milvus-udrdxe-restart-dzgn9 created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-restart-dzgn9 -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-restart-dzgn9 ns-gqdda Restart milvus-udrdxe etcd,minio,milvus Creating -/- Sep 01,2025 11:47 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check
pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 500m / 500m 512Mi / 512Mi data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-restart-dzgn9 ns-gqdda Restart milvus-udrdxe etcd,minio,milvus Failed 3/3 Sep 01,2025 11:47 UTC+0800 check ops status done check opsrequest progress ops_status:milvus-udrdxe-restart-dzgn9 ns-gqdda Restart milvus-udrdxe etcd,minio,milvus Failed 3/3 Sep 01,2025 11:47 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-restart-dzgn9 --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-restart-dzgn9 patched `kbcli cluster delete-ops --name milvus-udrdxe-restart-dzgn9 --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-restart-dzgn9 deleted cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale milvus-udrdxe --auto-approve --force=true --components milvus --cpu 600m --memory 0.6Gi --namespace ns-gqdda ` OpsRequest milvus-udrdxe-verticalscaling-qgrvx created successfully, you can view the progress: kbcli cluster describe-ops
milvus-udrdxe-verticalscaling-qgrvx -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-verticalscaling-qgrvx ns-gqdda VerticalScaling milvus-udrdxe milvus Running -/- Sep 01,2025 11:49 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 600m / 600m 644245094400m / 644245094400m data:5Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:50 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 
0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-verticalscaling-qgrvx ns-gqdda VerticalScaling milvus-udrdxe milvus Succeed 1/1 Sep 01,2025 11:49 UTC+0800 check ops status done ops_status:milvus-udrdxe-verticalscaling-qgrvx ns-gqdda VerticalScaling milvus-udrdxe milvus Succeed 1/1 Sep 01,2025 11:49 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-verticalscaling-qgrvx --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-verticalscaling-qgrvx patched `kbcli cluster delete-ops --name milvus-udrdxe-verticalscaling-qgrvx --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-verticalscaling-qgrvx deleted No resources found in ns-gqdda namespace. check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success `kubectl get pvc -l app.kubernetes.io/instance=milvus-udrdxe,apps.kubeblocks.io/component-name=milvus,apps.kubeblocks.io/vct-name=data --namespace ns-gqdda ` cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in milvus-udrdxe namespace.
`kbcli cluster volume-expand milvus-udrdxe --auto-approve --force=true --components milvus --volume-claim-templates data --storage 8Gi --namespace ns-gqdda ` OpsRequest milvus-udrdxe-volumeexpansion-8fctz created successfully, you can view the progress: kbcli cluster describe-ops milvus-udrdxe-volumeexpansion-8fctz -n ns-gqdda check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-volumeexpansion-8fctz ns-gqdda VolumeExpansion milvus-udrdxe milvus Creating -/- Sep 01,2025 11:52 UTC+0800 check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Updating Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 600m / 600m 644245094400m / 644245094400m data:8Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:50 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 
644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 No resources found in milvus-udrdxe namespace. check ops status `kbcli cluster list-ops milvus-udrdxe --status all --namespace ns-gqdda ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME milvus-udrdxe-volumeexpansion-8fctz ns-gqdda VolumeExpansion milvus-udrdxe milvus Succeed 1/1 Sep 01,2025 11:52 UTC+0800 check ops status done ops_status:milvus-udrdxe-volumeexpansion-8fctz ns-gqdda VolumeExpansion milvus-udrdxe milvus Succeed 1/1 Sep 01,2025 11:52 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests milvus-udrdxe-volumeexpansion-8fctz --namespace ns-gqdda ` opsrequest.apps.kubeblocks.io/milvus-udrdxe-volumeexpansion-8fctz patched `kbcli cluster delete-ops --name milvus-udrdxe-volumeexpansion-8fctz --force --auto-approve --namespace ns-gqdda ` OpsRequest milvus-udrdxe-volumeexpansion-8fctz deleted No resources found in ns-gqdda namespace.
check db_client batch data count `echo "curl -s -H 'Content-Type: application/json' -X POST http://milvus-udrdxe-proxy.ns-gqdda.svc.cluster.local:19530/v1/vector/query -d '{\"collectionName\":\"executions_loop_collection\",\"filter\":\"id == 22314\",\"limit\":0,\"outputFields\":[\"id\"]}' " | kubectl exec -it milvus-udrdxe-milvus-0 --namespace ns-gqdda -- bash` check db_client batch data Success cluster update terminationPolicy WipeOut `kbcli cluster update milvus-udrdxe --termination-policy=WipeOut --namespace ns-gqdda ` cluster.apps.kubeblocks.io/milvus-udrdxe updated (no change) check cluster status `kbcli cluster list milvus-udrdxe --show-labels --namespace ns-gqdda ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS milvus-udrdxe ns-gqdda milvus WipeOut Running Sep 01,2025 11:29 UTC+0800 app.kubernetes.io/instance=milvus-udrdxe,clusterdefinition.kubeblocks.io/name=milvus,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances milvus-udrdxe --namespace ns-gqdda ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME milvus-udrdxe-etcd-0 ns-gqdda milvus-udrdxe etcd Running leader 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 milvus-udrdxe-milvus-0 ns-gqdda milvus-udrdxe milvus Running 0 600m / 600m 644245094400m / 644245094400m data:8Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:50 UTC+0800 milvus-udrdxe-minio-0 ns-gqdda milvus-udrdxe minio Running 0 600m / 600m 644245094400m / 644245094400m data:10Gi aks-cicdamdpool-15164480-vmss000005/10.224.0.6 Sep 01,2025 11:47 UTC+0800 check pod status done connect unsupported engine type: milvus milvus-standalone-0.9.1 cluster list-logs `kbcli cluster list-logs milvus-udrdxe --namespace ns-gqdda ` No log files found.
You can enable the log feature with the kbcli command below. kbcli cluster update milvus-udrdxe --enable-all-logs=true --namespace ns-gqdda Error from server (NotFound): pods "milvus-udrdxe-milvus-0" not found cluster logs `kbcli cluster logs milvus-udrdxe --tail 30 --namespace ns-gqdda ` Defaulted container "etcd" out of: etcd, lorry, inject-bash (init), init-lorry (init) {"level":"info","ts":"2025-09-01T03:47:52.011089Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft f1850263fae5c2fe [peers: [], term: 4, commit: 456, applied: 0, lastindex: 456, lastterm: 4]"} {"level":"warn","ts":"2025-09-01T03:47:52.022181Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2025-09-01T03:47:52.028130Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":385} {"level":"info","ts":"2025-09-01T03:47:52.038976Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2025-09-01T03:47:52.039046Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"f1850263fae5c2fe","local-server-version":"3.6.1","cluster-id":"ebdbeecc765029ad","cluster-version":"3.6"} {"level":"info","ts":"2025-09-01T03:47:52.039132Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f1850263fae5c2fe","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2025-09-01T03:47:52.039230Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/run/etcd/default.etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2025-09-01T03:47:52.039290Z","caller":"fileutil/purge.go:49","msg":"started to purge
file","dir":"/var/run/etcd/default.etcd/member/snap","suffix":"snap","max":5,"interval":"30s"*** ***"level":"info","ts":"2025-09-01T03:47:52.039299Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/run/etcd/default.etcd/member/wal","suffix":"wal","max":5,"interval":"30s"*** ***"level":"info","ts":"2025-09-01T03:47:52.039320Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"[::]:2380"*** ***"level":"info","ts":"2025-09-01T03:47:52.039375Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"[::]:2380"*** ***"level":"info","ts":"2025-09-01T03:47:52.039392Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f1850263fae5c2fe switched to configuration voters=(17403318963477529342)"*** ***"level":"info","ts":"2025-09-01T03:47:52.039341Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f1850263fae5c2fe","initial-advertise-peer-urls":["http://milvus-udrdxe-etcd-0.milvus-udrdxe-etcd-headless.ns-gqdda.svc.cluster.local:2380"],"listen-peer-urls":["http://0.0.0.0:2380"],"advertise-client-urls":["http://milvus-udrdxe-etcd-0.milvus-udrdxe-etcd-headless.ns-gqdda.svc.cluster.local:2379"],"listen-client-urls":["http://0.0.0.0:2379"],"listen-metrics-urls":[]*** ***"level":"info","ts":"2025-09-01T03:47:52.039455Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"ebdbeecc765029ad","local-member-id":"f1850263fae5c2fe","added-peer-id":"f1850263fae5c2fe","added-peer-peer-urls":["http://milvus-udrdxe-etcd-0.milvus-udrdxe-etcd-headless.ns-gqdda.svc.cluster.local:2380"],"added-peer-is-learner":false*** ***"level":"info","ts":"2025-09-01T03:47:52.039565Z","caller":"membership/cluster.go:673","msg":"updated cluster version","cluster-id":"ebdbeecc765029ad","local-member-id":"f1850263fae5c2fe","from":"3.6","to":"3.6"*** ***"level":"info","ts":"2025-09-01T03:47:52.911975Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f1850263fae5c2fe is starting a 
new election at term 4"*** ***"level":"info","ts":"2025-09-01T03:47:52.912026Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f1850263fae5c2fe became pre-candidate at term 4"*** ***"level":"info","ts":"2025-09-01T03:47:52.912089Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f1850263fae5c2fe received MsgPreVoteResp from f1850263fae5c2fe at term 4"*** ***"level":"info","ts":"2025-09-01T03:47:52.912104Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f1850263fae5c2fe has received 1 MsgPreVoteResp votes and 0 vote rejections"*** ***"level":"info","ts":"2025-09-01T03:47:52.912123Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f1850263fae5c2fe became candidate at term 5"*** ***"level":"info","ts":"2025-09-01T03:47:52.917351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f1850263fae5c2fe received MsgVoteResp from f1850263fae5c2fe at term 5"*** ***"level":"info","ts":"2025-09-01T03:47:52.917368Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f1850263fae5c2fe has received 1 MsgVoteResp votes and 0 vote rejections"*** ***"level":"info","ts":"2025-09-01T03:47:52.917382Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f1850263fae5c2fe became leader at term 5"*** ***"level":"info","ts":"2025-09-01T03:47:52.917390Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f1850263fae5c2fe elected leader f1850263fae5c2fe at term 5"*** ***"level":"info","ts":"2025-09-01T03:47:52.921386Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"f1850263fae5c2fe","local-member-attributes":"***Name:milvus-udrdxe-etcd-0 ClientURLs:[http://milvus-udrdxe-etcd-0.milvus-udrdxe-etcd-headless.ns-gqdda.svc.cluster.local:2379]***","cluster-id":"ebdbeecc765029ad","publish-timeout":"7s"*** ***"level":"info","ts":"2025-09-01T03:47:52.921435Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"*** 
{"level":"info","ts":"2025-09-01T03:47:52.921536Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-09-01T03:47:52.921575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-09-01T03:47:52.921924Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-09-01T03:47:52.923189Z","caller":"embed/serve.go:220","msg":"serving client traffic insecurely; this is strongly discouraged!","traffic":"grpc+http","address":"[::]:2379"}
delete cluster milvus-udrdxe `kbcli cluster delete milvus-udrdxe --auto-approve --namespace ns-gqdda ` Cluster milvus-udrdxe deleted Error from server (NotFound): secrets "milvus-udrdxe-s3-credential" not found Error from server (NotFound): secrets "milvus-udrdxe-s3-credential" not found Error from server (NotFound): secrets "milvus-udrdxe-s3-credential" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "milvus-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "milvus-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "milvus-udrdxe-minio-service" not found pod_info:milvus-udrdxe-etcd-0 2/2 Running 0 9m58s milvus-udrdxe-milvus-0 1/1 Terminating 0 7m20s milvus-udrdxe-minio-0 1/1 Running 0 10m pod_info:milvus-udrdxe-etcd-0 2/2 Running 0 10m milvus-udrdxe-milvus-0 0/1 Terminating 0 7m41s milvus-udrdxe-minio-0 1/1 Running 0 10m No resources found in ns-gqdda namespace. delete cluster pod done No resources found in ns-gqdda namespace. check cluster resource non-exist OK: pvc No resources found in ns-gqdda namespace. delete cluster done No resources found in ns-gqdda namespace. No resources found in ns-gqdda namespace. No resources found in ns-gqdda namespace. Milvus Test Suite All Done!
delete cluster etcdm-udrdxe `kbcli cluster delete etcdm-udrdxe --auto-approve --namespace ns-gqdda ` Cluster etcdm-udrdxe deleted Error from server (NotFound): secrets "etcdm-udrdxe-s3-credential" not found Error from server (NotFound): secrets "etcdm-udrdxe-s3-credential" not found Error from server (NotFound): secrets "etcdm-udrdxe-s3-credential" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "etcdm-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "etcdm-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "etcdm-udrdxe-minio-service" not found No resources found in ns-gqdda namespace. delete cluster pod done No resources found in ns-gqdda namespace. check cluster resource non-exist OK: pvc No resources found in ns-gqdda namespace. delete cluster done No resources found in ns-gqdda namespace. No resources found in ns-gqdda namespace. No resources found in ns-gqdda namespace. delete cluster kafkam-udrdxe `kbcli cluster delete kafkam-udrdxe --auto-approve --namespace ns-gqdda ` Cluster kafkam-udrdxe deleted Error from server (NotFound): secrets "kafkam-udrdxe-s3-credential" not found Error from server (NotFound): secrets "kafkam-udrdxe-s3-credential" not found Error from server (NotFound): secrets "kafkam-udrdxe-s3-credential" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "kafkam-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "kafkam-udrdxe-minio-service" not found Error from server (NotFound): servicedescriptors.apps.kubeblocks.io "kafkam-udrdxe-minio-service" not found No resources found in ns-gqdda namespace. delete cluster pod done No resources found in ns-gqdda namespace. check cluster resource non-exist OK: pvc No resources found in ns-gqdda namespace. delete cluster done No resources found in ns-gqdda namespace. 
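Each delete flow above repeats the same `Error from server (NotFound)` lines for secrets and servicedescriptors that were already removed with the cluster. A hypothetical cleanup wrapper (not part of the suite) could treat "NotFound" as success and keep only genuine failures in the log; note that plain `kubectl delete` already supports `--ignore-not-found` for the same purpose:

```shell
# Hypothetical helper: run a cleanup command, swallow NotFound errors,
# and surface anything else as a real failure.
ignore_not_found() {
  out=$("$@" 2>&1) && { [ -n "$out" ] && echo "$out"; return 0; }
  case "$out" in
    *NotFound*|*"not found"*) return 0 ;;  # resource already gone: treat as success
    *) echo "$out" >&2; return 1 ;;        # any other error still fails loudly
  esac
}

# Usage against a real cluster would look like:
#   ignore_not_found kubectl delete secret milvus-udrdxe-s3-credential -n ns-gqdda
```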
No resources found in ns-gqdda namespace. No resources found in ns-gqdda namespace.
[PASSED]|[Create]|[ClusterDefinition=etcd;]|[Description=Create a cluster with the specified cluster definition etcd]
[PASSED]|[Create]|[ClusterDefinition=kafka;]|[Description=Create a cluster with the specified cluster definition kafka]
--------------------------------------Milvus (Topology = standalone Replicas 1) Test Result--------------------------------------
[PASSED]|[Create]|[ClusterDefinition=milvus;]|[Description=Create a cluster with the specified cluster definition milvus]
[PASSED]|[VerticalScaling]|[ComponentName=etcd]|[Description=VerticalScaling the cluster specify component etcd]
[PASSED]|[Restart]|[ComponentName=milvus]|[Description=Restart the cluster specify component milvus]
[PASSED]|[VerticalScaling]|[ComponentName=minio]|[Description=VerticalScaling the cluster specify component minio]
[PASSED]|[VolumeExpansion]|[ComponentName=etcd,minio]|[Description=VolumeExpansion the cluster specify component etcd,minio]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=milvus]|[Description=Simulates conditions where pods experience connection stress from expected or undesired processes, testing the application's resilience to slowness or unavailability of some replicas under high connection load.]
[WARNING]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=milvus]|[Description=VerticalScaling the cluster specify component milvus]
[PASSED]|[VolumeExpansion]|[ComponentName=milvus]|[Description=VolumeExpansion the cluster specify component milvus]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
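The summary above is a set of `|`-delimited records whose first field carries the verdict. A small sketch of tallying those verdicts into per-status counts (`summarize` is a hypothetical helper, not part of the suite):

```shell
# Hypothetical post-processing: count PASSED/FAILED/WARNING result lines.
summarize() {
  awk -F'|' '/^\[(PASSED|FAILED|WARNING)\]/ {
    gsub(/[][]/, "", $1)   # strip the brackets around the status field
    count[$1]++
  }
  END { for (s in count) printf "%s=%d\n", s, count[s] }'
}

# Example: feeding it two of the result lines from this run.
printf '%s\n%s\n' \
  '[PASSED]|[Stop]|[-]|[Description=Stop the cluster]' \
  '[WARNING]|[Restart]|[-]|[Description=Restart the cluster]' \
  | summarize | sort
# prints: PASSED=1 then WARNING=1
```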