bash test/kbcli/test_kbcli_0.9.sh --type 12 --version 0.9.5 --generate-output true --chaos-mesh true --drain-node true --random-namespace true --region eastus --cloud-provider aks CURRENT_TEST_DIR:test/kbcli source commons files source engines files source kubeblocks files `kubectl get namespace | grep ns-rqwpo ` `kubectl create namespace ns-rqwpo` namespace/ns-rqwpo created create namespace ns-rqwpo done download kbcli `gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)` `curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.5-beta.8` Your system is linux_amd64 Installing kbcli ... Downloading ... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 32.1M 100 32.1M 0 0 171M 0 --:--:-- --:--:-- --:--:-- 171M kbcli installed successfully. Kubernetes: v1.32.6 KubeBlocks: 0.9.5 kbcli: 0.9.5-beta.8 WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5) Make sure your docker service is running and begin your journey with kbcli: kbcli playground init For more information on how to get started, please visit: https://kubeblocks.io download kbcli v0.9.5-beta.8 done Kubernetes: v1.32.6 KubeBlocks: 0.9.5 kbcli: 0.9.5-beta.8 WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5) Kubernetes Env: v1.32.6 POD_RESOURCES: aks kb-default-sc found aks default-vsc found found default storage class: default kubeblocks version is:0.9.5 skip upgrade kubeblocks Error: no repositories to show helm repo add chaos-mesh https://charts.chaos-mesh.org "chaos-mesh" has been added to your repositories add helm chart repo chaos-mesh success chaos mesh already installed check cluster definition set component name:graphd set component version set component version:nebula set service versions:v3.5.0,v3.8.0 set service versions sorted:v3.5.0,v3.8.0 unsupported component definition REPORT_COUNT 0:0 set replicas first:2,v3.5.0|2,v3.8.0 set replicas third:2,v3.8.0 set replicas fourth:2,v3.8.0 set minimum cmpv service version set minimum cmpv service version replicas:2,v3.8.0 REPORT_COUNT:1 CLUSTER_TOPOLOGY: set cluster topology: default LIMIT_CPU:0.1 LIMIT_MEMORY:0.5 storage size: 1 No resources found in ns-rqwpo namespace. 
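For reference, the pre-flight setup the harness has just performed (idempotent namespace creation and default StorageClass detection) can be reproduced by hand; a minimal sketch, assuming only standard kubectl behavior and the ns-rqwpo namespace used in this run:
# create the test namespace only if it is not already present
kubectl get namespace ns-rqwpo >/dev/null 2>&1 || kubectl create namespace ns-rqwpo
# confirm which StorageClass the cluster treats as default ("default" on this AKS cluster)
kubectl get storageclass | grep '(default)'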
termination_policy:WipeOut
create 2 replica WipeOut nebula cluster
check cluster version
check cluster definition
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: nebula-vrphvy
  namespace: ns-rqwpo
spec:
  clusterDefinitionRef: nebula
  topology: default
  terminationPolicy: WipeOut
  componentSpecs:
  - name: graphd
    serviceVersion: v3.8.0
    serviceAccountName: kb-nebula-vrphvy
    replicas: 2
    resources:
      requests:
        cpu: 100m
        memory: 0.5Gi
      limits:
        cpu: 100m
        memory: 0.5Gi
    volumeClaimTemplates:
    - name: logs
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - name: metad
    serviceVersion: v3.8.0
    serviceAccountName: kb-nebula-vrphvy
    replicas: 3
    resources:
      requests:
        cpu: 100m
        memory: 0.5Gi
      limits:
        cpu: 100m
        memory: 0.5Gi
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    - name: logs
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - name: storaged
    serviceVersion: v3.8.0
    serviceAccountName: kb-nebula-vrphvy
    replicas: 3
    resources:
      requests:
        cpu: 100m
        memory: 0.5Gi
      limits:
        cpu: 100m
        memory: 0.5Gi
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    - name: logs
      spec:
        storageClassName:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
`kubectl apply -f test_create_nebula-vrphvy.yaml`
cluster.apps.kubeblocks.io/nebula-vrphvy created
apply test_create_nebula-vrphvy.yaml Success
`rm -rf test_create_nebula-vrphvy.yaml`
check cluster status
`kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-vrphvy ns-rqwpo nebula WipeOut Sep 01,2025 11:18 UTC+0800 clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name=
cluster_status: cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800
nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800
nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi
nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi
nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:26 UTC+0800 logs:1Gi
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
check pod nebula-vrphvy-graphd-0 container_name graphd exist password 35c3P@334@TD9H@7
check pod nebula-vrphvy-graphd-0 container_name agent exist password 35c3P@334@TD9H@7
check pod nebula-vrphvy-graphd-0 container_name exporter exist password 35c3P@334@TD9H@7
check pod nebula-vrphvy-graphd-0 container_name lorry exist password 35c3P@334@TD9H@7
check pod nebula-vrphvy-graphd-0 container_name config-manager exist password 35c3P@334@TD9H@7
No container logs contain secret password.
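The credential handling above is a pattern repeated throughout this run: read the root account Secret that KubeBlocks generates for the graphd component, base64-decode its fields, then probe the graphd endpoint with nebula-console from inside a pod that ships the console binary. A condensed sketch using the object names from this run (the Secret data is base64-encoded, hence the explicit decode):
NS=ns-rqwpo
SECRET=nebula-vrphvy-graphd-account-root
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
# run nebula-console non-interactively from the storaged pod, as the harness does above
echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.${NS}.svc.cluster.local --user ${DB_USERNAME} --password '${DB_PASSWORD}' --port ${DB_PORT}" \
  | kubectl exec -i nebula-vrphvy-storaged-0 -n "$NS" -- bash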
describe cluster `kbcli cluster describe nebula-vrphvy --namespace ns-rqwpo ` Name: nebula-vrphvy Created Time: Sep 01,2025 11:18 UTC+0800 NAMESPACE CLUSTER-DEFINITION VERSION STATUS TERMINATION-POLICY ns-rqwpo nebula Running WipeOut Endpoints: COMPONENT MODE INTERNAL EXTERNAL graphd ReadWrite nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local:9669 nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local:19669 nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local:19670 Topology: COMPONENT INSTANCE ROLE STATUS AZ NODE CREATED-TIME graphd nebula-vrphvy-graphd-0 Running 0 aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 graphd nebula-vrphvy-graphd-1 Running 0 aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 metad nebula-vrphvy-metad-0 Running 0 aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 metad nebula-vrphvy-metad-1 Running 0 aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 metad nebula-vrphvy-metad-2 Running 0 aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 11:18 UTC+0800 storaged nebula-vrphvy-storaged-0 Running 0 aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 storaged nebula-vrphvy-storaged-1 Running 0 aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 storaged nebula-vrphvy-storaged-2 Running 0 aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:26 UTC+0800 Resources Allocation: COMPONENT DEDICATED CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS graphd false 100m / 100m 512Mi / 512Mi logs:1Gi default metad false 100m / 100m 512Mi / 512Mi data:1Gi default logs:1Gi default storaged false 100m / 100m 512Mi / 512Mi data:1Gi default logs:1Gi default Images: COMPONENT TYPE IMAGE graphd docker.io/apecloud/nebula-graphd:v3.8.0 metad docker.io/apecloud/nebula-metad:v3.8.0 storaged docker.io/apecloud/nebula-storaged:v3.8.0 Data Protection: BACKUP-REPO AUTO-BACKUP BACKUP-SCHEDULE BACKUP-METHOD BACKUP-RETENTION RECOVERABLE-TIME Show cluster events: kbcli cluster list-events -n ns-rqwpo nebula-vrphvy `kbcli cluster label nebula-vrphvy app.kubernetes.io/instance- --namespace ns-rqwpo ` label "app.kubernetes.io/instance" not found. 
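The label checks that follow exercise the full `kbcli cluster label` lifecycle: set a label (directly or via a selector), list, overwrite, and remove. Condensed into a sketch with the cluster and namespace names from this run, the sequence below mirrors what the log records next:
# set, overwrite, list, and finally remove a custom label on the Cluster object
kbcli cluster label nebula-vrphvy case.name=kbcli.test1 --namespace ns-rqwpo
kbcli cluster label nebula-vrphvy case.name=kbcli.test2 --overwrite --namespace ns-rqwpo
kbcli cluster label nebula-vrphvy --list --namespace ns-rqwpo
kbcli cluster label nebula-vrphvy case.name- --namespace ns-rqwpo   # a trailing '-' deletes the key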
`kbcli cluster label nebula-vrphvy app.kubernetes.io/instance=nebula-vrphvy --namespace ns-rqwpo ` `kbcli cluster label nebula-vrphvy --list --namespace ns-rqwpo ` NAME NAMESPACE LABELS nebula-vrphvy ns-rqwpo app.kubernetes.io/instance=nebula-vrphvy clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= label cluster app.kubernetes.io/instance=nebula-vrphvy Success `kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=nebula-vrphvy --namespace ns-rqwpo ` `kbcli cluster label nebula-vrphvy --list --namespace ns-rqwpo ` NAME NAMESPACE LABELS nebula-vrphvy ns-rqwpo app.kubernetes.io/instance=nebula-vrphvy case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= label cluster case.name=kbcli.test1 Success `kbcli cluster label nebula-vrphvy case.name=kbcli.test2 --overwrite --namespace ns-rqwpo ` `kbcli cluster label nebula-vrphvy --list --namespace ns-rqwpo ` NAME NAMESPACE LABELS nebula-vrphvy ns-rqwpo app.kubernetes.io/instance=nebula-vrphvy case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= label cluster case.name=kbcli.test2 Success `kbcli cluster label nebula-vrphvy case.name- --namespace ns-rqwpo ` `kbcli cluster label nebula-vrphvy --list --namespace ns-rqwpo ` NAME NAMESPACE LABELS nebula-vrphvy ns-rqwpo app.kubernetes.io/instance=nebula-vrphvy clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= delete cluster label case.name Success cluster connect `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default `echo "echo \"SHOW HOSTS;\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` Defaulted container "storaged" out of: storaged, agent, exporter, lorry, config-manager, init-console (init), init-agent (init), init-lorry (init) Unable to use a TTY - input is not a terminal or the right kind of file Welcome! 
(root@nebula) [(none)]> +---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+ | Host | Port | Status | Leader count | Leader distribution | Partition distribution | Version | +---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+ | "nebula-vrphvy-storaged-0.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.8.0" | | "nebula-vrphvy-storaged-1.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.8.0" | | "nebula-vrphvy-storaged-2.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.8.0" | +---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+ Got 3 rows (time spent 1.033ms/1.633702ms) Mon, 01 Sep 2025 03:27:31 UTC (root@nebula) [(none)]> Bye root! Mon, 01 Sep 2025 03:27:31 UTC connect cluster Success insert batch data by db client Error from server (NotFound): pods "test-db-client-executionloop-nebula-vrphvy" not found DB_CLIENT_BATCH_DATA_COUNT: `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-executionloop-nebula-vrphvy --namespace ns-rqwpo ` Error from server (NotFound): pods "test-db-client-executionloop-nebula-vrphvy" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): pods "test-db-client-executionloop-nebula-vrphvy" not found
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-nebula-vrphvy
  namespace: ns-rqwpo
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local"
    - "--user"
    - "root"
    - "--password"
    - "35c3P@334@TD9H@7"
    - "--port"
    - "9669"
    - "--dbtype"
    - "nebula"
    - "--test"
    - "executionloop"
    - "--duration"
    - "60"
    - "--interval"
    - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-nebula-vrphvy.yaml`
pod/test-db-client-executionloop-nebula-vrphvy created
apply test-db-client-executionloop-nebula-vrphvy.yaml Success
`rm -rf test-db-client-executionloop-nebula-vrphvy.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 5s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 9s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 14s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 19s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 24s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 29s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 35s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 40s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 45s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 50s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 55s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 60s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 65s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 71s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 76s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 1/1 Running 0 81s
check pod test-db-client-executionloop-nebula-vrphvy status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-nebula-vrphvy 0/1 Completed 0 86s
check cluster status
`kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-vrphvy ns-rqwpo nebula WipeOut Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-vrphvy --namespace
ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:26 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done --host nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password 35c3P@334@TD9H@7 --port 9669 --dbtype nebula --test executionloop --duration 60 --interval 1 SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] 03:27:34.446 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Failed to connect to Nebula space: create session failed. Trying with space Nebula. CREATE SPACE default Successfully Execution loop start: Drop tag executions_loop_table successful. Create tag executions_loop_table successful. Tag exists and metadata retrieved successfully. 
Execution loop start: INSERT VERTEX executions_loop_table(name, age) VALUES "vertex_1":("person_1", 81); [ 1s ] executions total: 1 successful: 1 failed: 0 disconnect: 0 [ 2s ] executions total: 124 successful: 124 failed: 0 disconnect: 0 [ 3s ] executions total: 272 successful: 272 failed: 0 disconnect: 0 [ 4s ] executions total: 410 successful: 410 failed: 0 disconnect: 0 [ 5s ] executions total: 549 successful: 549 failed: 0 disconnect: 0 [ 6s ] executions total: 675 successful: 675 failed: 0 disconnect: 0 [ 7s ] executions total: 812 successful: 812 failed: 0 disconnect: 0 [ 8s ] executions total: 937 successful: 937 failed: 0 disconnect: 0 [ 9s ] executions total: 1087 successful: 1087 failed: 0 disconnect: 0 [ 10s ] executions total: 1232 successful: 1232 failed: 0 disconnect: 0 [ 11s ] executions total: 1374 successful: 1374 failed: 0 disconnect: 0 [ 12s ] executions total: 1520 successful: 1520 failed: 0 disconnect: 0 [ 13s ] executions total: 1659 successful: 1659 failed: 0 disconnect: 0 [ 14s ] executions total: 1801 successful: 1801 failed: 0 disconnect: 0 [ 15s ] executions total: 1948 successful: 1948 failed: 0 disconnect: 0 [ 16s ] executions total: 2090 successful: 2090 failed: 0 disconnect: 0 [ 17s ] executions total: 2235 successful: 2235 failed: 0 disconnect: 0 [ 18s ] executions total: 2384 successful: 2384 failed: 0 disconnect: 0 [ 19s ] executions total: 2524 successful: 2524 failed: 0 disconnect: 0 [ 20s ] executions total: 2652 successful: 2652 failed: 0 disconnect: 0 [ 21s ] executions total: 2791 successful: 2791 failed: 0 disconnect: 0 [ 22s ] executions total: 2921 successful: 2921 failed: 0 disconnect: 0 [ 23s ] executions total: 3060 successful: 3060 failed: 0 disconnect: 0 [ 24s ] executions total: 3197 successful: 3197 failed: 0 disconnect: 0 [ 25s ] executions total: 3325 successful: 3325 failed: 0 disconnect: 0 [ 26s ] executions total: 3457 successful: 3457 failed: 0 disconnect: 0 [ 27s ] executions total: 3588 successful: 3588 failed: 0 disconnect: 0 [ 28s ] executions total: 3719 successful: 3719 failed: 0 disconnect: 0 [ 29s ] executions total: 3848 successful: 3848 failed: 0 disconnect: 0 [ 30s ] executions total: 3983 successful: 3983 failed: 0 disconnect: 0 [ 31s ] executions total: 4114 successful: 4114 failed: 0 disconnect: 0 [ 32s ] executions total: 4239 successful: 4239 failed: 0 disconnect: 0 [ 33s ] executions total: 4376 successful: 4376 failed: 0 disconnect: 0 [ 34s ] executions total: 4517 successful: 4517 failed: 0 disconnect: 0 [ 35s ] executions total: 4661 successful: 4661 failed: 0 disconnect: 0 [ 36s ] executions total: 4804 successful: 4804 failed: 0 disconnect: 0 [ 37s ] executions total: 4948 successful: 4948 failed: 0 disconnect: 0 [ 38s ] executions total: 5086 successful: 5086 failed: 0 disconnect: 0 [ 39s ] executions total: 5227 successful: 5227 failed: 0 disconnect: 0 [ 40s ] executions total: 5368 successful: 5368 failed: 0 disconnect: 0 [ 41s ] executions total: 5507 successful: 5507 failed: 0 disconnect: 0 [ 42s ] executions total: 5644 successful: 5644 failed: 0 disconnect: 0 [ 43s ] executions total: 5784 successful: 5784 failed: 0 disconnect: 0 [ 44s ] executions total: 5916 successful: 5916 failed: 0 disconnect: 0 [ 45s ] executions total: 6052 successful: 6052 failed: 0 disconnect: 0 [ 46s ] executions total: 6196 successful: 6196 failed: 0 disconnect: 0 [ 47s ] executions total: 6335 successful: 6335 failed: 0 disconnect: 0 [ 48s ] executions total: 6482 successful: 6482 failed: 0 disconnect: 0 [ 49s ] executions 
total: 6617 successful: 6617 failed: 0 disconnect: 0 [ 50s ] executions total: 6748 successful: 6748 failed: 0 disconnect: 0 [ 51s ] executions total: 6893 successful: 6893 failed: 0 disconnect: 0 [ 52s ] executions total: 7025 successful: 7025 failed: 0 disconnect: 0 [ 53s ] executions total: 7168 successful: 7168 failed: 0 disconnect: 0 [ 54s ] executions total: 7310 successful: 7310 failed: 0 disconnect: 0 [ 55s ] executions total: 7445 successful: 7445 failed: 0 disconnect: 0 [ 56s ] executions total: 7588 successful: 7588 failed: 0 disconnect: 0 [ 57s ] executions total: 7720 successful: 7720 failed: 0 disconnect: 0 [ 58s ] executions total: 7855 successful: 7855 failed: 0 disconnect: 0 [ 59s ] executions total: 7994 successful: 7994 failed: 0 disconnect: 0 [ 60s ] executions total: 8131 successful: 8131 failed: 0 disconnect: 0 [ 60s ] executions total: 8253 successful: 8253 failed: 0 disconnect: 0 Test Result: Total Executions: 8253 Successful Executions: 8253 Failed Executions: 0 Disconnection Counts: 0 Connection Information: Database Type: nebula Host: nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local Port: 9669 Database: Table: User: root Org: Access Mode: mysql Test Type: executionloop Query: Duration: 60 seconds Interval: 1 seconds DB_CLIENT_BATCH_DATA_COUNT: 8253 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-executionloop-nebula-vrphvy --namespace ns-rqwpo ` pod/test-db-client-executionloop-nebula-vrphvy patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-executionloop-nebula-vrphvy" force deleted test failover connectionstress check cluster status before cluster-failover-connectionstress check cluster status done cluster_status:Running check node drain check node drain success Error from server (NotFound): pods "test-db-client-connectionstress-nebula-vrphvy" not found `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-connectionstress-nebula-vrphvy --namespace ns-rqwpo ` Error from server (NotFound): pods "test-db-client-connectionstress-nebula-vrphvy" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): pods "test-db-client-connectionstress-nebula-vrphvy" not found
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-nebula-vrphvy
  namespace: ns-rqwpo
spec:
  containers:
  - name: test-dbclient
    imagePullPolicy: IfNotPresent
    image: docker.io/apecloud/dbclient:test
    args:
    - "--host"
    - "nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local"
    - "--user"
    - "root"
    - "--password"
    - "35c3P@334@TD9H@7"
    - "--port"
    - "9669"
    - "--database"
    - "default"
    - "--dbtype"
    - "nebula"
    - "--test"
    - "connectionstress"
    - "--connections"
    - "300"
    - "--duration"
    - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-nebula-vrphvy.yaml`
pod/test-db-client-connectionstress-nebula-vrphvy created
apply test-db-client-connectionstress-nebula-vrphvy.yaml Success
`rm -rf test-db-client-connectionstress-nebula-vrphvy.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 1/1 Running 0 5s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 1/1 Running 0 9s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 1/1 Running 0 14s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 1/1 Running 0 19s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 1/1 Running 0 25s
check pod test-db-client-connectionstress-nebula-vrphvy status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-vrphvy 0/1 Completed 0 30s
check cluster status
`kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-vrphvy ns-rqwpo nebula WipeOut Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800
nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800
nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 11:18 UTC+0800 logs:1Gi
nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy
storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:26 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done at com.vesoft.nebula.client.graph.SessionPool.(SessionPool.java:76) at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73) ... 4 more Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485) at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126) ... 6 more 03:29:11.630 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:11.631 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Failed to connect to Nebula space: create session failed. Trying with space Nebula. CREATE SPACE default Successfully 03:29:21.649 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:21.650 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. java.io.IOException: Failed to connect to Nebula space: create session failed. at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:76) at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126) at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:33) at OneClient.executeTest(OneClient.java:108) at OneClient.main(OneClient.java:40) Caused by: java.lang.RuntimeException: create session failed. at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:130) at com.vesoft.nebula.client.graph.SessionPool.(SessionPool.java:76) at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73) ... 
4 more Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485) at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126) ... 6 more 03:29:21.652 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:21.652 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Failed to connect to Nebula space: create session failed. Trying with space Nebula. CREATE SPACE default Successfully 03:29:31.762 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:31.762 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. java.io.IOException: Failed to connect to Nebula space: create session failed. at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:76) at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126) at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:33) at OneClient.executeTest(OneClient.java:108) at OneClient.main(OneClient.java:40) Caused by: java.lang.RuntimeException: create session failed. at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:130) at com.vesoft.nebula.client.graph.SessionPool.(SessionPool.java:76) at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73) ... 4 more Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485) at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126) ... 6 more 03:29:31.764 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:31.765 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Failed to connect to Nebula space: create session failed. Trying with space Nebula. com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. 
You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.net.NebulaPool.getSession(NebulaPool.java:143) at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:64) at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126) at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:33) at OneClient.executeTest(OneClient.java:108) at OneClient.main(OneClient.java:40) 03:29:31.789 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 03:29:31.789 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. java.io.IOException: Failed to connect to Nebula space: create session failed. at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:76) at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126) at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:33) at OneClient.executeTest(OneClient.java:108) at OneClient.main(OneClient.java:40) Caused by: java.lang.RuntimeException: create session failed. at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:130) at com.vesoft.nebula.client.graph.SessionPool.(SessionPool.java:76) at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73) ... 4 more Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.3.16 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485) at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126) ... 6 more Releasing connections... Test Result: null Connection Information: Database Type: nebula Host: nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local Port: 9669 Database: default Table: User: root Org: Access Mode: mysql Test Type: connectionstress Connection Count: 300 Duration: 60 seconds `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-connectionstress-nebula-vrphvy --namespace ns-rqwpo ` pod/test-db-client-connectionstress-nebula-vrphvy patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod "test-db-client-connectionstress-nebula-vrphvy" force deleted check failover pod name failover pod name:nebula-vrphvy-graphd-0 failover connectionstress Success `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=metad --namespace ns-rqwpo | (grep "metad" || true )` check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` `kubectl get pvc -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=metad,storaged,apps.kubeblocks.io/vct-name=data --namespace ns-rqwpo ` No resources found in ns-rqwpo namespace. nebula-vrphvy metad,storaged data pvc is empty cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace. `kbcli cluster volume-expand nebula-vrphvy --auto-approve --force=true --components metad,storaged --volume-claim-templates data --storage 2Gi --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-volumeexpansion-zgm8b created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-volumeexpansion-zgm8b -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-volumeexpansion-zgm8b ns-rqwpo VolumeExpansion nebula-vrphvy metad,storaged Pending -/- Sep 01,2025 11:29 UTC+0800 ops_status:nebula-vrphvy-volumeexpansion-zgm8b ns-rqwpo VolumeExpansion nebula-vrphvy metad,storaged Creating -/- Sep 01,2025 11:29 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy 
graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:21 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 11:18 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:26 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:26 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done No resources found in nebula-vrphvy namespace. 
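At this point the metad and storaged pods already report data:2Gi, so the VolumeExpansion has taken effect. The earlier PVC query came back empty, most likely because `component-name=metad,storaged` is not a valid way to match two values in one label selector; querying per component sidesteps that. A verification sketch using the labels KubeBlocks attaches to the generated PVCs in this run:
# confirm the data PVCs for metad and storaged now report 2Gi
for comp in metad storaged; do
  kubectl get pvc -n ns-rqwpo \
    -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=${comp},apps.kubeblocks.io/vct-name=data \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
done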
check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-volumeexpansion-zgm8b ns-rqwpo VolumeExpansion nebula-vrphvy metad,storaged Succeed 6/6 Sep 01,2025 11:29 UTC+0800 check ops status done ops_status:nebula-vrphvy-volumeexpansion-zgm8b ns-rqwpo VolumeExpansion nebula-vrphvy metad,storaged Succeed 6/6 Sep 01,2025 11:29 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-volumeexpansion-zgm8b --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-volumeexpansion-zgm8b patched `kbcli cluster delete-ops --name nebula-vrphvy-volumeexpansion-zgm8b --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-volumeexpansion-zgm8b deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop nebula-vrphvy --auto-approve --force=true --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-stop-g9ps6 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-stop-g9ps6 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-stop-g9ps6 ns-rqwpo Stop nebula-vrphvy Pending -/- Sep 01,2025 11:36 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Stopped Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-stop-g9ps6 ns-rqwpo Stop nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 11:36 UTC+0800 check ops status done ops_status:nebula-vrphvy-stop-g9ps6 ns-rqwpo Stop nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 11:36 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-stop-g9ps6 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-stop-g9ps6 patched `kbcli cluster delete-ops --name nebula-vrphvy-stop-g9ps6 --force --auto-approve 
--namespace ns-rqwpo ` OpsRequest nebula-vrphvy-stop-g9ps6 deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start nebula-vrphvy --force=true --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-start-49f6j created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-start-49f6j -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-start-49f6j ns-rqwpo Start nebula-vrphvy Pending -/- Sep 01,2025 11:36 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root 
--password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-start-49f6j ns-rqwpo Start nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 11:36 UTC+0800 check ops status done ops_status:nebula-vrphvy-start-49f6j ns-rqwpo Start nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 11:36 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-start-49f6j --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-start-49f6j patched `kbcli cluster delete-ops --name nebula-vrphvy-start-49f6j --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-start-49f6j deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=metad --namespace ns-rqwpo | (grep "metad" || true )` cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-vrphvy --auto-approve --force=true --components metad --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-vctb9 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-restart-vctb9 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Creating -/- Sep 01,2025 11:41 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating [Error] check cluster status timeout --------------------------------------get cluster nebula-vrphvy yaml-------------------------------------- `kubectl get cluster nebula-vrphvy -o yaml --namespace ns-rqwpo ` apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: annotations: kubeblocks.io/ops-request: '[***"name":"nebula-vrphvy-restart-vctb9","type":"Restart"***]' kubeblocks.io/reconcile: "2025-09-01T03:40:35.96943609Z" kubectl.kubernetes.io/last-applied-configuration: | ***"apiVersion":"apps.kubeblocks.io/v1alpha1","kind":"Cluster","metadata":***"annotations":***,"name":"nebula-vrphvy","namespace":"ns-rqwpo"***,"spec":***"clusterDefinitionRef":"nebula","componentSpecs":[***"name":"graphd","replicas":2,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"metad","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"storaged","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***],"terminationPolicy":"WipeOut","topology":"default"*** creationTimestamp: "2025-09-01T03:18:14Z" 
finalizers: - cluster.kubeblocks.io/finalizer generation: 6 labels: app.kubernetes.io/instance: nebula-vrphvy clusterdefinition.kubeblocks.io/name: nebula clusterversion.kubeblocks.io/name: "" name: nebula-vrphvy namespace: ns-rqwpo resourceVersion: "35503" uid: ca25ef5e-9c03-4792-b928-9f8a1dcdac6b spec: clusterDefinitionRef: nebula componentSpecs: - componentDef: nebula-graphd name: graphd replicas: 2 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-metad name: metad replicas: 3 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-storaged name: storaged replicas: 3 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi resources: cpu: "0" memory: "0" storage: size: "0" terminationPolicy: WipeOut topology: default status: clusterDefGeneration: 2 components: graphd: phase: Running podsReady: true podsReadyTime: "2025-09-01T03:40:41Z" metad: phase: Updating podsReady: false podsReadyTime: "2025-09-01T03:40:53Z" storaged: message: InstanceSet/nebula-vrphvy-storaged: '["nebula-vrphvy-storaged-0"]' phase: Running podsReady: true podsReadyTime: "2025-09-01T03:41:03Z" conditions: - lastTransitionTime: "2025-09-01T03:18:14Z" message: 'The operator has started the provisioning of Cluster: nebula-vrphvy' observedGeneration: 6 reason: PreCheckSucceed status: "True" type: ProvisioningStarted - lastTransitionTime: "2025-09-01T03:18:14Z" message: Successfully applied for resources observedGeneration: 6 reason: ApplyResourcesSucceed status: "True" type: ApplyResources - lastTransitionTime: "2025-09-01T03:41:11Z" message: 'pods are not ready in Components: [metad], refer to related component message in Cluster.status.components' reason: ReplicasNotReady status: "False" type: ReplicasReady - lastTransitionTime: "2025-09-01T03:41:11Z" message: 'pods are unavailable in Components: [metad], refer to related component message in Cluster.status.components' reason: ComponentsNotReady status: "False" type: Ready observedGeneration: 6 phase: Updating ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe cluster nebula-vrphvy-------------------------------------- `kubectl describe cluster nebula-vrphvy --namespace ns-rqwpo ` Name: nebula-vrphvy Namespace: ns-rqwpo Labels: app.kubernetes.io/instance=nebula-vrphvy clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= Annotations: kubeblocks.io/ops-request: [***"name":"nebula-vrphvy-restart-vctb9","type":"Restart"***] kubeblocks.io/reconcile: 2025-09-01T03:40:35.96943609Z API Version: apps.kubeblocks.io/v1alpha1 Kind: Cluster Metadata: Creation Timestamp: 2025-09-01T03:18:14Z Finalizers: cluster.kubeblocks.io/finalizer Generation: 6 Resource Version: 35503 UID: 
ca25ef5e-9c03-4792-b928-9f8a1dcdac6b Spec: Cluster Definition Ref: nebula Component Specs: Component Def: nebula-graphd Name: graphd Replicas: 2 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-metad Name: metad Replicas: 3 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-storaged Name: storaged Replicas: 3 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Resources: Cpu: 0 Memory: 0 Storage: Size: 0 Termination Policy: WipeOut Topology: default Status: Cluster Def Generation: 2 Components: Graphd: Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T03:40:41Z Metad: Phase: Updating Pods Ready: false Pods Ready Time: 2025-09-01T03:40:53Z Storaged: Message: InstanceSet/nebula-vrphvy-storaged: ["nebula-vrphvy-storaged-0"] Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T03:41:03Z Conditions: Last Transition Time: 2025-09-01T03:18:14Z Message: The operator has started the provisioning of Cluster: nebula-vrphvy Observed Generation: 6 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2025-09-01T03:18:14Z Message: Successfully applied for resources Observed Generation: 6 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Last Transition Time: 2025-09-01T03:41:11Z Message: pods are not ready in Components: [metad], refer to related component message in Cluster.status.components Reason: ReplicasNotReady Status: False Type: ReplicasReady Last Transition Time: 2025-09-01T03:41:11Z Message: pods are unavailable in Components: [metad], refer to related component message in Cluster.status.components Reason: ComponentsNotReady Status: False Type: Ready Observed Generation: 6 Phase: Updating Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 27m (x18 over 28m) event-controller Pod nebula-vrphvy-metad-2: Readiness probe failed: Get "http://10.244.0.161:19559/status": dial tcp 10.244.0.161:19559: connect: connection refused Warning ReplicasNotReady 26m cluster-controller pods are not ready in Components: [graphd], refer to related component message in Cluster.status.components Warning ComponentsNotReady 26m cluster-controller pods are unavailable in Components: [graphd], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 22m (x3 over 30m) cluster-controller component is Creating Warning ReplicasNotReady 22m cluster-controller pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 22m cluster-controller pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Normal Running 21m cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal 
AllReplicasReady 21m cluster-controller all pods of components are ready, waiting for the probe detection successful Normal ClusterReady 21m cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal ApplyResourcesSucceed 18m (x4 over 30m) cluster-controller Successfully applied for resources Normal PreCheckSucceed 18m (x4 over 30m) cluster-controller The operator has started the provisioning of Cluster: nebula-vrphvy Normal ComponentPhaseTransition 18m (x2 over 18m) cluster-controller component is Updating Warning ReplicasNotReady 18m cluster-controller pods are not ready in Components: [metad], refer to related component message in Cluster.status.components Warning ComponentsNotReady 18m cluster-controller pods are unavailable in Components: [metad], refer to related component message in Cluster.status.components Warning ReplicasNotReady 18m cluster-controller pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 18m cluster-controller pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 14m (x4 over 26m) cluster-controller component is Running Normal HorizontalScale 12m component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 0 to 3 Normal HorizontalScale 12m component-controller start horizontal scale component metad of cluster nebula-vrphvy from 0 to 3 Normal HorizontalScale 12m component-controller start horizontal scale component graphd of cluster nebula-vrphvy from 0 to 2 Normal ComponentPhaseTransition 9m45s cluster-controller component is Failed Warning FailedAttachVolume 7m58s event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-355e6ee7-6489-4527-b469-90731bcbb9c7" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-355e6ee7-6489-4527-b469-90731bcbb9c7 Warning FailedAttachVolume 7m58s event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b Warning FailedAttachVolume 7m58s event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a Warning FailedAttachVolume 7m57s event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8 
------------------------------------------------------------------------------------------------------------------ check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Init:0/1 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 logs:1Gi pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 pod_status:Init:0/1 check pod status done check cluster status again check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 1/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 1/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 1/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 1/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 2/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 2/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 2/3 Sep 01,2025 11:41 UTC+0800 ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Running 2/3 Sep 01,2025 11:41 UTC+0800 check ops 
status done ops_status:nebula-vrphvy-restart-vctb9 ns-rqwpo Restart nebula-vrphvy metad Succeed 3/3 Sep 01,2025 11:41 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-restart-vctb9 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-restart-vctb9 patched `kbcli cluster delete-ops --name nebula-vrphvy-restart-vctb9 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-vctb9 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` cluster hscale check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace. `kbcli cluster hscale nebula-vrphvy --auto-approve --force=true --components storaged --replicas 4 --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-lb4h8 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-horizontalscaling-lb4h8 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-lb4h8 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Creating -/- Sep 01,2025 11:52 UTC+0800 ops_status:nebula-vrphvy-horizontalscaling-lb4h8 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Creating -/- Sep 01,2025 11:52 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi 
aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:51 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-3 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:52 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done No resources found in nebula-vrphvy namespace. 
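The credential lookups above read the graphd root account straight from the KubeBlocks-managed secret and print it as DB_USERNAME/DB_PASSWORD/DB_PORT. A minimal sketch of the same lookup done by hand, assuming the secret name and data keys shown in this log and the usual base64 encoding of Kubernetes secret values:

```bash
# Hypothetical manual decode of the graphd root account used throughout this test.
NS=ns-rqwpo
SECRET=nebula-vrphvy-graphd-account-root
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD};DB_PORT:${DB_PORT};DB_DATABASE:default"
```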
check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-lb4h8 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Succeed 1/1 Sep 01,2025 11:52 UTC+0800 check ops status done ops_status:nebula-vrphvy-horizontalscaling-lb4h8 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Succeed 1/1 Sep 01,2025 11:52 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-horizontalscaling-lb4h8 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-horizontalscaling-lb4h8 patched `kbcli cluster delete-ops --name nebula-vrphvy-horizontalscaling-lb4h8 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-lb4h8 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` cluster hscale check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace. 
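After every succeeded operation the test clears the OpsRequest the same way: patch metadata.finalizers to an empty list, then force-delete it through kbcli. A consolidated sketch of that cleanup step, with the OpsRequest name and namespace as parameters (the helper name is hypothetical):

```bash
# Hypothetical wrapper for the recurring OpsRequest cleanup shown in this log.
cleanup_ops() {
  local ops_name="$1" ns="$2"
  # Drop the finalizers so the object can be removed right away.
  kubectl patch opsrequests "$ops_name" --namespace "$ns" --type=merge \
    -p '{"metadata":{"finalizers":[]}}'
  # Remove the OpsRequest record itself.
  kbcli cluster delete-ops --name "$ops_name" --force --auto-approve --namespace "$ns"
}

cleanup_ops nebula-vrphvy-horizontalscaling-lb4h8 ns-rqwpo
```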
`kbcli cluster hscale nebula-vrphvy --auto-approve --force=true --components storaged --replicas 3 --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-n7qj7 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-horizontalscaling-n7qj7 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-n7qj7 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Pending -/- Sep 01,2025 11:53 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:51 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000004/10.224.0.6 Sep 01,2025 11:36 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done No resources found in nebula-vrphvy namespace. 
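Every ops step is gated on the cluster phase: the repeated cluster_status: lines come from a poll loop that waits for Running and, as seen earlier in this run, dumps and describes the Cluster object once it gives up. A rough sketch of that gate, with timeout and interval values that are assumptions (the real script's settings are not shown in this log):

```bash
# Hypothetical re-creation of the "check cluster status" gate used around each ops step.
NS=ns-rqwpo
CLUSTER=nebula-vrphvy
TIMEOUT=600    # assumed overall timeout in seconds
INTERVAL=10    # assumed poll interval in seconds
elapsed=0
while :; do
  phase=$(kubectl get cluster "$CLUSTER" --namespace "$NS" -o jsonpath='{.status.phase}')
  echo "cluster_status:${phase}"
  [ "$phase" = "Running" ] && break
  if [ "$elapsed" -ge "$TIMEOUT" ]; then
    echo "[Error] check cluster status timeout"
    kubectl get cluster "$CLUSTER" -o yaml --namespace "$NS"
    kubectl describe cluster "$CLUSTER" --namespace "$NS"
    break
  fi
  sleep "$INTERVAL"
  elapsed=$((elapsed + INTERVAL))
done
```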
check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-n7qj7 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Succeed 1/1 Sep 01,2025 11:53 UTC+0800 check ops status done ops_status:nebula-vrphvy-horizontalscaling-n7qj7 ns-rqwpo HorizontalScaling nebula-vrphvy storaged Succeed 1/1 Sep 01,2025 11:53 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-horizontalscaling-n7qj7 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-horizontalscaling-n7qj7 patched `kbcli cluster delete-ops --name nebula-vrphvy-horizontalscaling-n7qj7 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-n7qj7 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale nebula-vrphvy --auto-approve --force=true --components storaged --cpu 200m --memory 0.6Gi --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-6xnnd created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-verticalscaling-6xnnd -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Creating -/- Sep 01,2025 11:53 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating [Error] check cluster status timeout --------------------------------------get cluster nebula-vrphvy yaml-------------------------------------- `kubectl get cluster nebula-vrphvy -o yaml --namespace ns-rqwpo ` apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: annotations: kubeblocks.io/ops-request: '[***"name":"nebula-vrphvy-verticalscaling-6xnnd","type":"VerticalScaling"***]' kubeblocks.io/reconcile: "2025-09-01T04:00:47.914013305Z" kubectl.kubernetes.io/last-applied-configuration: | ***"apiVersion":"apps.kubeblocks.io/v1alpha1","kind":"Cluster","metadata":***"annotations":***,"name":"nebula-vrphvy","namespace":"ns-rqwpo"***,"spec":***"clusterDefinitionRef":"nebula","componentSpecs":[***"name":"graphd","replicas":2,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"metad","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"storaged","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***],"terminationPolicy":"WipeOut","topology":"default"*** creationTimestamp: 
"2025-09-01T03:18:14Z" finalizers: - cluster.kubeblocks.io/finalizer generation: 9 labels: app.kubernetes.io/instance: nebula-vrphvy clusterdefinition.kubeblocks.io/name: nebula clusterversion.kubeblocks.io/name: "" name: nebula-vrphvy namespace: ns-rqwpo resourceVersion: "51227" uid: ca25ef5e-9c03-4792-b928-9f8a1dcdac6b spec: clusterDefinitionRef: nebula componentSpecs: - componentDef: nebula-graphd name: graphd replicas: 2 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-metad name: metad replicas: 3 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-storaged name: storaged replicas: 3 resources: limits: cpu: 200m memory: 644245094400m requests: cpu: 200m memory: 644245094400m serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi resources: cpu: "0" memory: "0" storage: size: "0" terminationPolicy: WipeOut topology: default status: clusterDefGeneration: 2 components: graphd: phase: Running podsReady: true podsReadyTime: "2025-09-01T03:40:41Z" metad: phase: Running podsReady: true podsReadyTime: "2025-09-01T04:00:34Z" storaged: message: InstanceSet/nebula-vrphvy-storaged: '["nebula-vrphvy-storaged-0"]' phase: Updating podsReady: false podsReadyTime: "2025-09-01T03:53:42Z" conditions: - lastTransitionTime: "2025-09-01T03:18:14Z" message: 'The operator has started the provisioning of Cluster: nebula-vrphvy' observedGeneration: 9 reason: PreCheckSucceed status: "True" type: ProvisioningStarted - lastTransitionTime: "2025-09-01T03:18:14Z" message: Successfully applied for resources observedGeneration: 9 reason: ApplyResourcesSucceed status: "True" type: ApplyResources - lastTransitionTime: "2025-09-01T03:53:52Z" message: 'pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components' reason: ReplicasNotReady status: "False" type: ReplicasReady - lastTransitionTime: "2025-09-01T03:53:52Z" message: 'pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components' reason: ComponentsNotReady status: "False" type: Ready observedGeneration: 9 phase: Updating ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe cluster nebula-vrphvy-------------------------------------- `kubectl describe cluster nebula-vrphvy --namespace ns-rqwpo ` Name: nebula-vrphvy Namespace: ns-rqwpo Labels: app.kubernetes.io/instance=nebula-vrphvy clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= Annotations: kubeblocks.io/ops-request: [***"name":"nebula-vrphvy-verticalscaling-6xnnd","type":"VerticalScaling"***] kubeblocks.io/reconcile: 2025-09-01T04:00:47.914013305Z API Version: apps.kubeblocks.io/v1alpha1 Kind: Cluster Metadata: Creation Timestamp: 2025-09-01T03:18:14Z Finalizers: 
cluster.kubeblocks.io/finalizer Generation: 9 Resource Version: 51227 UID: ca25ef5e-9c03-4792-b928-9f8a1dcdac6b Spec: Cluster Definition Ref: nebula Component Specs: Component Def: nebula-graphd Name: graphd Replicas: 2 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-metad Name: metad Replicas: 3 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-storaged Name: storaged Replicas: 3 Resources: Limits: Cpu: 200m Memory: 644245094400m Requests: Cpu: 200m Memory: 644245094400m Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Resources: Cpu: 0 Memory: 0 Storage: Size: 0 Termination Policy: WipeOut Topology: default Status: Cluster Def Generation: 2 Components: Graphd: Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T03:40:41Z Metad: Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T04:00:34Z Storaged: Message: InstanceSet/nebula-vrphvy-storaged: ["nebula-vrphvy-storaged-0"] Phase: Updating Pods Ready: false Pods Ready Time: 2025-09-01T03:53:42Z Conditions: Last Transition Time: 2025-09-01T03:18:14Z Message: The operator has started the provisioning of Cluster: nebula-vrphvy Observed Generation: 9 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2025-09-01T03:18:14Z Message: Successfully applied for resources Observed Generation: 9 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Last Transition Time: 2025-09-01T03:53:52Z Message: pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Reason: ReplicasNotReady Status: False Type: ReplicasReady Last Transition Time: 2025-09-01T03:53:52Z Message: pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Reason: ComponentsNotReady Status: False Type: Ready Observed Generation: 9 Phase: Updating Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 39m (x18 over 41m) event-controller Pod nebula-vrphvy-metad-2: Readiness probe failed: Get "http://10.244.0.161:19559/status": dial tcp 10.244.0.161:19559: connect: connection refused Warning ReplicasNotReady 39m cluster-controller pods are not ready in Components: [graphd], refer to related component message in Cluster.status.components Warning ComponentsNotReady 39m cluster-controller pods are unavailable in Components: [graphd], refer to related component message in Cluster.status.components Warning ComponentsNotReady 34m cluster-controller pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 34m (x3 over 42m) cluster-controller component is Creating Warning ReplicasNotReady 34m cluster-controller pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Normal 
Running 33m cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal ClusterReady 33m cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal PreCheckSucceed 31m (x4 over 43m) cluster-controller The operator has started the provisioning of Cluster: nebula-vrphvy Normal ApplyResourcesSucceed 31m (x4 over 43m) cluster-controller Successfully applied for resources Normal ComponentPhaseTransition 31m (x2 over 31m) cluster-controller component is Updating Warning ReplicasNotReady 31m cluster-controller pods are not ready in Components: [metad], refer to related component message in Cluster.status.components Warning ComponentsNotReady 31m cluster-controller pods are unavailable in Components: [metad], refer to related component message in Cluster.status.components Warning ReplicasNotReady 31m cluster-controller pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 31m cluster-controller pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Normal HorizontalScale 24m component-controller start horizontal scale component graphd of cluster nebula-vrphvy from 0 to 2 Normal HorizontalScale 24m component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 0 to 3 Normal HorizontalScale 24m component-controller start horizontal scale component metad of cluster nebula-vrphvy from 0 to 3 Normal ComponentPhaseTransition 22m cluster-controller component is Failed Warning FailedAttachVolume 20m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-355e6ee7-6489-4527-b469-90731bcbb9c7" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-355e6ee7-6489-4527-b469-90731bcbb9c7 Warning FailedAttachVolume 20m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b Warning FailedAttachVolume 20m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a Warning FailedAttachVolume 20m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8 Normal AllReplicasReady 9m15s (x4 over 33m) cluster-controller all pods of components are ready, waiting for the probe detection successful Normal HorizontalScale 9m7s (x2 
over 9m8s) component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 3 to 4 Normal ComponentPhaseTransition 7m52s (x10 over 39m) cluster-controller component is Running Normal HorizontalScale 7m35s component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 4 to 3 Warning FailedAttachVolume 5m32s event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-355e6ee7-6489-4527-b469-90731bcbb9c7" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-355e6ee7-6489-4527-b469-90731bcbb9c7 Warning FailedAttachVolume 5m31s event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b Warning Unhealthy 5m31s event-controller Pod nebula-vrphvy-metad-2: Readiness probe failed: Get "http://10.244.0.161:19559/status": dial tcp 10.244.0.161:19559: connect: connection refused Warning FailedAttachVolume 5m31s event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8 Warning FailedAttachVolume 5m30s event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a Normal ComponentPhaseTransition 5m28s cluster-controller component is Updating Warning ReplicasNotReady 5m28s cluster-controller pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 5m28s cluster-controller pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 40s cluster-controller component is Running Warning ReplicasNotReady 40s cluster-controller pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 40s cluster-controller pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Warning FailedAttachVolume 27s (x2 over 60s) event-controller Pod nebula-vrphvy-storaged-1: AttachVolume.Attach failed for volume "pvc-35115436-223f-48a9-b5ba-6cdb51af4be5" : rpc error: code = Internal desc = Attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-35115436-223f-48a9-b5ba-6cdb51af4be5 to instance 
aks-cicdamdpool-25950949-vmss000007 failed with disk(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-35115436-223f-48a9-b5ba-6cdb51af4be5) already attached to node(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-cicdamdpool-25950949-vmss/virtualMachines/aks-cicdamdpool-25950949-vmss_4), could not be attached to node(aks-cicdamdpool-25950949-vmss000007) Warning FailedAttachVolume 27s (x2 over 59s) event-controller Pod nebula-vrphvy-storaged-1: AttachVolume.Attach failed for volume "pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37" : rpc error: code = Internal desc = Attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37 to instance aks-cicdamdpool-25950949-vmss000007 failed with disk(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37) already attached to node(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-cicdamdpool-25950949-vmss/virtualMachines/aks-cicdamdpool-25950949-vmss_4), could not be attached to node(aks-cicdamdpool-25950949-vmss000007) ------------------------------------------------------------------------------------------------------------------ check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Init:0/3 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:53 UTC+0800 logs:1Gi pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 
pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:0/3 pod_status:Init:1/3 pod_status:Init:1/3 check pod status done check cluster status again check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Running 2/3 Sep 01,2025 11:53 UTC+0800 ops_status:nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Running 2/3 Sep 01,2025 11:53 UTC+0800 ops_status:nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Running 2/3 Sep 01,2025 11:53 UTC+0800 ops_status:nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Running 2/3 Sep 01,2025 11:53 UTC+0800 ops_status:nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Running 3/3 Sep 01,2025 11:53 UTC+0800 check ops status done ops_status:nebula-vrphvy-verticalscaling-6xnnd ns-rqwpo VerticalScaling nebula-vrphvy storaged Succeed 3/3 Sep 01,2025 11:53 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-verticalscaling-6xnnd --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-verticalscaling-6xnnd patched `kbcli cluster delete-ops --name nebula-vrphvy-verticalscaling-6xnnd --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-6xnnd deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster hscale check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace. 
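The 644245094400m memory value that the vertical scaling left in the spec and pod listing is not corruption: it is the 0.6Gi from the vscale command rendered in Kubernetes' milli-unit notation (0.6 × 1024³ bytes = 644,245,094.4 bytes, i.e. 644,245,094,400 milli-bytes). A purely illustrative one-liner to confirm the conversion:

```bash
# 0.6Gi expressed in Kubernetes milli-byte notation: 0.6 * 1024^3 bytes * 1000
awk 'BEGIN { printf "%.0fm\n", 0.6 * 1024^3 * 1000 }'   # prints 644245094400m
```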
`kbcli cluster hscale nebula-vrphvy --auto-approve --force=true --components graphd --replicas 3 --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-fscx5 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-horizontalscaling-fscx5 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-fscx5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Pending -/- Sep 01,2025 12:02 UTC+0800 ops_status:nebula-vrphvy-horizontalscaling-fscx5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Creating -/- Sep 01,2025 12:02 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-2 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:02 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:53 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` 
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done No resources found in nebula-vrphvy namespace. check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-fscx5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Succeed 1/1 Sep 01,2025 12:02 UTC+0800 check ops status done ops_status:nebula-vrphvy-horizontalscaling-fscx5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Succeed 1/1 Sep 01,2025 12:02 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-horizontalscaling-fscx5 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-horizontalscaling-fscx5 patched `kbcli cluster delete-ops --name nebula-vrphvy-horizontalscaling-fscx5 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-fscx5 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster hscale check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace. 
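The recurring check db_client batch data count step re-runs one nGQL count through nebula-console inside the storaged-0 pod, reaching graphd via its cluster-local service. A slightly untangled sketch of that invocation, using the query, address, and credentials printed in this log:

```bash
# Hypothetical untangling of the batch-data count check: pipe a single nGQL
# statement into nebula-console running inside the storaged-0 pod.
NS=ns-rqwpo
QUERY='use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);'
kubectl exec -i nebula-vrphvy-storaged-0 --namespace "$NS" -- bash -c \
  "echo \"$QUERY\" | /usr/local/nebula/console/nebula-console \
     --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local \
     --user root --password '35c3P@334@TD9H@7' --port 9669"
```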
`kbcli cluster hscale nebula-vrphvy --auto-approve --force=true --components graphd --replicas 2 --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-jf6j5 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-horizontalscaling-jf6j5 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-jf6j5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Creating -/- Sep 01,2025 12:03 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:36 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:44 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:41 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 11:53 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done No resources found in nebula-vrphvy namespace. 
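For reference, the DB_USERNAME/DB_PASSWORD/DB_PORT values printed above come from the graphd root-account secret queried with jsonpath; the values under .data are base64-encoded. A minimal sketch of decoding them (the explicit --namespace flag is added here for clarity and is an assumption about how the script resolves the namespace):
# Sketch: decode the connection settings from the graphd root-account secret.
NS=ns-rqwpo
SECRET=nebula-vrphvy-graphd-account-root
DB_USERNAME=$(kubectl get secrets "$SECRET" --namespace "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secrets "$SECRET" --namespace "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secrets "$SECRET" --namespace "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD};DB_PORT:${DB_PORT}"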
check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-horizontalscaling-jf6j5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Succeed 1/1 Sep 01,2025 12:03 UTC+0800 check ops status done ops_status:nebula-vrphvy-horizontalscaling-jf6j5 ns-rqwpo HorizontalScaling nebula-vrphvy graphd Succeed 1/1 Sep 01,2025 12:03 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-horizontalscaling-jf6j5 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-horizontalscaling-jf6j5 patched `kbcli cluster delete-ops --name nebula-vrphvy-horizontalscaling-jf6j5 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-horizontalscaling-jf6j5 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-vrphvy --auto-approve --force=true --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-sd9jw created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-restart-sd9jw -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Creating -/- Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Creating -/- Sep 01,2025 12:03 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating [Error] check cluster status timeout --------------------------------------get cluster nebula-vrphvy yaml-------------------------------------- `kubectl get cluster nebula-vrphvy -o yaml --namespace ns-rqwpo ` apiVersion: apps.kubeblocks.io/v1alpha1 kind: Cluster metadata: annotations: kubeblocks.io/ops-request: '[***"name":"nebula-vrphvy-restart-sd9jw","type":"Restart"***]' kubeblocks.io/reconcile: "2025-09-01T04:00:47.914013305Z" kubectl.kubernetes.io/last-applied-configuration: | ***"apiVersion":"apps.kubeblocks.io/v1alpha1","kind":"Cluster","metadata":***"annotations":***,"name":"nebula-vrphvy","namespace":"ns-rqwpo"***,"spec":***"clusterDefinitionRef":"nebula","componentSpecs":[***"name":"graphd","replicas":2,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"metad","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***,***"name":"storaged","replicas":3,"resources":***"limits":***"cpu":"100m","memory":"0.5Gi"***,"requests":***"cpu":"100m","memory":"0.5Gi"***,"serviceAccountName":"kb-nebula-vrphvy","serviceVersion":"v3.8.0","volumeClaimTemplates":[***"name":"data","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***,***"name":"logs","spec":***"accessModes":["ReadWriteOnce"],"resources":***"requests":***"storage":"1Gi"***,"storageClassName":null***]***],"terminationPolicy":"WipeOut","topology":"default"*** creationTimestamp: "2025-09-01T03:18:14Z" finalizers: - cluster.kubeblocks.io/finalizer generation: 11 labels: app.kubernetes.io/instance: nebula-vrphvy 
clusterdefinition.kubeblocks.io/name: nebula clusterversion.kubeblocks.io/name: "" name: nebula-vrphvy namespace: ns-rqwpo resourceVersion: "56581" uid: ca25ef5e-9c03-4792-b928-9f8a1dcdac6b spec: clusterDefinitionRef: nebula componentSpecs: - componentDef: nebula-graphd name: graphd replicas: 2 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-metad name: metad replicas: 3 resources: limits: cpu: 100m memory: 512Mi requests: cpu: 100m memory: 512Mi serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - componentDef: nebula-storaged name: storaged replicas: 3 resources: limits: cpu: 200m memory: 644245094400m requests: cpu: 200m memory: 644245094400m serviceAccountName: kb-nebula-vrphvy serviceVersion: v3.8.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi - name: logs spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi resources: cpu: "0" memory: "0" storage: size: "0" terminationPolicy: WipeOut topology: default status: clusterDefGeneration: 2 components: graphd: phase: Running podsReady: true podsReadyTime: "2025-09-01T04:06:56Z" metad: phase: Updating podsReady: false podsReadyTime: "2025-09-01T04:00:34Z" storaged: message: InstanceSet/nebula-vrphvy-storaged: '["nebula-vrphvy-storaged-0"]' phase: Updating podsReady: false podsReadyTime: "2025-09-01T04:02:32Z" conditions: - lastTransitionTime: "2025-09-01T03:18:14Z" message: 'The operator has started the provisioning of Cluster: nebula-vrphvy' observedGeneration: 11 reason: PreCheckSucceed status: "True" type: ProvisioningStarted - lastTransitionTime: "2025-09-01T03:18:14Z" message: Successfully applied for resources observedGeneration: 11 reason: ApplyResourcesSucceed status: "True" type: ApplyResources - lastTransitionTime: "2025-09-01T04:03:51Z" message: 'pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components' reason: ReplicasNotReady status: "False" type: ReplicasReady - lastTransitionTime: "2025-09-01T04:03:51Z" message: 'pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components' reason: ComponentsNotReady status: "False" type: Ready observedGeneration: 11 phase: Updating ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe cluster nebula-vrphvy-------------------------------------- `kubectl describe cluster nebula-vrphvy --namespace ns-rqwpo ` Name: nebula-vrphvy Namespace: ns-rqwpo Labels: app.kubernetes.io/instance=nebula-vrphvy clusterdefinition.kubeblocks.io/name=nebula clusterversion.kubeblocks.io/name= Annotations: kubeblocks.io/ops-request: [***"name":"nebula-vrphvy-restart-sd9jw","type":"Restart"***] kubeblocks.io/reconcile: 2025-09-01T04:00:47.914013305Z API Version: apps.kubeblocks.io/v1alpha1 Kind: Cluster Metadata: Creation Timestamp: 2025-09-01T03:18:14Z Finalizers: cluster.kubeblocks.io/finalizer Generation: 11 Resource Version: 56581 UID: ca25ef5e-9c03-4792-b928-9f8a1dcdac6b Spec: Cluster Definition Ref: nebula Component 
Specs: Component Def: nebula-graphd Name: graphd Replicas: 2 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-metad Name: metad Replicas: 3 Resources: Limits: Cpu: 100m Memory: 512Mi Requests: Cpu: 100m Memory: 512Mi Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Component Def: nebula-storaged Name: storaged Replicas: 3 Resources: Limits: Cpu: 200m Memory: 644245094400m Requests: Cpu: 200m Memory: 644245094400m Service Account Name: kb-nebula-vrphvy Service Version: v3.8.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 2Gi Name: logs Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 1Gi Resources: Cpu: 0 Memory: 0 Storage: Size: 0 Termination Policy: WipeOut Topology: default Status: Cluster Def Generation: 2 Components: Graphd: Phase: Running Pods Ready: true Pods Ready Time: 2025-09-01T04:06:56Z Metad: Phase: Updating Pods Ready: false Pods Ready Time: 2025-09-01T04:00:34Z Storaged: Message: InstanceSet/nebula-vrphvy-storaged: ["nebula-vrphvy-storaged-0"] Phase: Updating Pods Ready: false Pods Ready Time: 2025-09-01T04:02:32Z Conditions: Last Transition Time: 2025-09-01T03:18:14Z Message: The operator has started the provisioning of Cluster: nebula-vrphvy Observed Generation: 11 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2025-09-01T03:18:14Z Message: Successfully applied for resources Observed Generation: 11 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Last Transition Time: 2025-09-01T04:03:51Z Message: pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Reason: ReplicasNotReady Status: False Type: ReplicasReady Last Transition Time: 2025-09-01T04:03:51Z Message: pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Reason: ComponentsNotReady Status: False Type: Ready Observed Generation: 11 Phase: Updating Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 50m (x18 over 51m) event-controller Pod nebula-vrphvy-metad-2: Readiness probe failed: Get "http://10.244.0.161:19559/status": dial tcp 10.244.0.161:19559: connect: connection refused Warning ComponentsNotReady 49m cluster-controller pods are unavailable in Components: [graphd], refer to related component message in Cluster.status.components Warning ReplicasNotReady 49m cluster-controller pods are not ready in Components: [graphd], refer to related component message in Cluster.status.components Warning ComponentsNotReady 44m cluster-controller pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 44m (x3 over 53m) cluster-controller component is Creating Warning ReplicasNotReady 44m cluster-controller pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Normal ClusterReady 44m cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal Running 44m cluster-controller Cluster: 
nebula-vrphvy is ready, current phase is Running Normal ApplyResourcesSucceed 41m (x4 over 53m) cluster-controller Successfully applied for resources Normal PreCheckSucceed 41m (x4 over 53m) cluster-controller The operator has started the provisioning of Cluster: nebula-vrphvy Warning ComponentsNotReady 41m cluster-controller pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Normal ComponentPhaseTransition 41m (x2 over 41m) cluster-controller component is Updating Warning ReplicasNotReady 41m cluster-controller pods are not ready in Components: [metad], refer to related component message in Cluster.status.components Warning ComponentsNotReady 41m cluster-controller pods are unavailable in Components: [metad], refer to related component message in Cluster.status.components Warning ReplicasNotReady 41m cluster-controller pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Normal HorizontalScale 34m component-controller start horizontal scale component metad of cluster nebula-vrphvy from 0 to 3 Normal HorizontalScale 34m component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 0 to 3 Normal HorizontalScale 34m component-controller start horizontal scale component graphd of cluster nebula-vrphvy from 0 to 2 Normal ComponentPhaseTransition 32m cluster-controller component is Failed Warning FailedAttachVolume 30m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b Warning FailedAttachVolume 30m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a Warning FailedAttachVolume 30m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-355e6ee7-6489-4527-b469-90731bcbb9c7" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-355e6ee7-6489-4527-b469-90731bcbb9c7 Warning FailedAttachVolume 30m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8 Normal AllReplicasReady 19m (x4 over 44m) cluster-controller all pods of components are ready, waiting for the probe detection successful Normal HorizontalScale 19m (x2 over 19m) component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 3 to 4 Normal ComponentPhaseTransition 17m 
(x10 over 49m) cluster-controller component is Running Normal HorizontalScale 17m component-controller start horizontal scale component storaged of cluster nebula-vrphvy from 4 to 3 Warning FailedAttachVolume 15m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-355e6ee7-6489-4527-b469-90731bcbb9c7" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-355e6ee7-6489-4527-b469-90731bcbb9c7 Warning FailedAttachVolume 15m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d046afcf-0f62-4431-85e8-6d62545b4c7b Warning FailedAttachVolume 15m event-controller Pod nebula-vrphvy-metad-1: AttachVolume.Attach failed for volume "pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-9b09e3c4-3113-4d7d-ba3f-afd9c03771e8 Warning Unhealthy 15m event-controller Pod nebula-vrphvy-metad-2: Readiness probe failed: Get "http://10.244.0.161:19559/status": dial tcp 10.244.0.161:19559: connect: connection refused Warning FailedAttachVolume 15m event-controller Pod nebula-vrphvy-storaged-2: AttachVolume.Attach failed for volume "pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-6df0635f-1fd5-41e4-acb0-16d97b16362a Warning ReplicasNotReady 15m cluster-controller pods are not ready in Components: [metad storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 15m cluster-controller pods are unavailable in Components: [metad storaged], refer to related component message in Cluster.status.components Warning ReplicasNotReady 10m cluster-controller pods are not ready in Components: [storaged], refer to related component message in Cluster.status.components Warning ComponentsNotReady 10m cluster-controller pods are unavailable in Components: [storaged], refer to related component message in Cluster.status.components Warning FailedAttachVolume 10m (x2 over 11m) event-controller Pod nebula-vrphvy-storaged-1: AttachVolume.Attach failed for volume "pvc-35115436-223f-48a9-b5ba-6cdb51af4be5" : rpc error: code = Internal desc = Attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-35115436-223f-48a9-b5ba-6cdb51af4be5 to instance aks-cicdamdpool-25950949-vmss000007 failed with disk(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-35115436-223f-48a9-b5ba-6cdb51af4be5) already attached to 
node(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-cicdamdpool-25950949-vmss/virtualMachines/aks-cicdamdpool-25950949-vmss_4), could not be attached to node(aks-cicdamdpool-25950949-vmss000007) Warning FailedAttachVolume 10m (x2 over 11m) event-controller Pod nebula-vrphvy-storaged-1: AttachVolume.Attach failed for volume "pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37" : rpc error: code = Internal desc = Attach volume /subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37 to instance aks-cicdamdpool-25950949-vmss000007 failed with disk(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/disks/pvc-d854a97b-e1e4-410c-bf6f-a27889fa9e37) already attached to node(/subscriptions/e659c16c-8ba9-41ab-98a0-d64b0237ba45/resourceGroups/MC_cicd-aks-otzy9s9z-group_cicd-aks-otzy9s9z_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-cicdamdpool-25950949-vmss/virtualMachines/aks-cicdamdpool-25950949-vmss_4), could not be attached to node(aks-cicdamdpool-25950949-vmss000007) Normal HorizontalScale 8m36s (x2 over 8m36s) component-controller start horizontal scale component graphd of cluster nebula-vrphvy from 2 to 3 Normal ClusterReady 7m49s (x2 over 8m44s) cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal Running 7m49s (x2 over 8m44s) cluster-controller Cluster: nebula-vrphvy is ready, current phase is Running Normal PreCheckSucceed 7m42s (x2 over 8m36s) cluster-controller The operator has started the provisioning of Cluster: nebula-vrphvy Normal ApplyResourcesSucceed 7m42s (x2 over 8m36s) cluster-controller Successfully applied for resources Normal HorizontalScale 7m42s component-controller start horizontal scale component graphd of cluster nebula-vrphvy from 3 to 2 Normal ComponentPhaseTransition 7m41s (x3 over 15m) cluster-controller component is Updating Warning ReplicasNotReady 7m41s (x2 over 8m35s) cluster-controller pods are not ready in Components: [graphd], refer to related component message in Cluster.status.components Warning ComponentsNotReady 7m41s (x2 over 8m35s) cluster-controller pods are unavailable in Components: [graphd], refer to related component message in Cluster.status.components Normal AllReplicasReady 7m40s (x3 over 8m44s) cluster-controller all pods of components are ready, waiting for the probe detection successful Normal ComponentPhaseTransition 4m21s (x5 over 10m) cluster-controller component is Running ------------------------------------------------------------------------------------------------------------------ check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:03 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi 
aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Init:0/1 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 11:55 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Init:0/3 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 UTC+0800 logs:1Gi pod_status:Init:0/1 Init:0/3 pod_status:Init:0/1 Init:0/3 pod_status:Init:0/1 Init:0/3 pod_status:Init:0/1 Init:0/3 pod_status:Init:0/1 Init:0/3 pod_status:Init:0/1 Init:0/3 check pod status done check cluster status again check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 4/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 4/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 4/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 5/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 6/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 6/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 6/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 7/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 7/8 Sep 01,2025 12:03 UTC+0800 ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Running 7/8 Sep 01,2025 12:03 UTC+0800 check ops status done ops_status:nebula-vrphvy-restart-sd9jw ns-rqwpo Restart nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 12:03 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-restart-sd9jw --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-restart-sd9jw patched `kbcli cluster delete-ops --name nebula-vrphvy-restart-sd9jw --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-sd9jw deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o 
jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-vrphvy --auto-approve --force=true --components graphd --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-k5gkr created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-restart-k5gkr -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-k5gkr ns-rqwpo Restart nebula-vrphvy graphd Creating -/- Sep 01,2025 12:12 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:13 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:12 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:11 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:12 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged 
Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-k5gkr ns-rqwpo Restart nebula-vrphvy graphd Succeed 2/2 Sep 01,2025 12:12 UTC+0800 check ops status done ops_status:nebula-vrphvy-restart-k5gkr ns-rqwpo Restart nebula-vrphvy graphd Succeed 2/2 Sep 01,2025 12:12 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-restart-k5gkr --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-restart-k5gkr patched `kbcli cluster delete-ops --name nebula-vrphvy-restart-k5gkr --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-k5gkr deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-vrphvy --auto-approve --force=true --components storaged --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-p27w4 created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-restart-p27w4 -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-p27w4 ns-rqwpo Restart nebula-vrphvy storaged Creating -/- Sep 01,2025 12:14 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 
app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:13 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:12 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:11 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:15 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:15 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:14 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-restart-p27w4 ns-rqwpo Restart nebula-vrphvy storaged Succeed 3/3 Sep 01,2025 12:14 UTC+0800 check ops status done ops_status:nebula-vrphvy-restart-p27w4 ns-rqwpo Restart nebula-vrphvy storaged Succeed 3/3 Sep 01,2025 12:14 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests 
nebula-vrphvy-restart-p27w4 --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-restart-p27w4 patched `kbcli cluster delete-ops --name nebula-vrphvy-restart-p27w4 --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-restart-p27w4 deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale nebula-vrphvy --auto-approve --force=true --components graphd --cpu 200m --memory 0.6Gi --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-2thjp created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-verticalscaling-2thjp -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-2thjp ns-rqwpo VerticalScaling nebula-vrphvy graphd Pending -/- Sep 01,2025 12:16 UTC+0800 ops_status:nebula-vrphvy-verticalscaling-2thjp ns-rqwpo VerticalScaling nebula-vrphvy graphd Pending -/- Sep 01,2025 12:16 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:16 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:1Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:16 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:11 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:06 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 100m / 100m 512Mi / 512Mi data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:03 
UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:15 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:15 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:14 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-2thjp ns-rqwpo VerticalScaling nebula-vrphvy graphd Succeed 2/2 Sep 01,2025 12:16 UTC+0800 check ops status done ops_status:nebula-vrphvy-verticalscaling-2thjp ns-rqwpo VerticalScaling nebula-vrphvy graphd Succeed 2/2 Sep 01,2025 12:16 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests nebula-vrphvy-verticalscaling-2thjp --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-verticalscaling-2thjp patched `kbcli cluster delete-ops --name nebula-vrphvy-verticalscaling-2thjp --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-2thjp deleted `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check db_client batch data count `echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash ` check db_client batch data Success check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=metad --namespace ns-rqwpo | (grep "metad" || true )` cluster vscale check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale nebula-vrphvy --auto-approve --force=true --components metad --cpu 200m --memory 0.6Gi --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-9xvzx created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-verticalscaling-9xvzx -n ns-rqwpo check ops status 
`kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-9xvzx ns-rqwpo VerticalScaling nebula-vrphvy metad Creating -/- Sep 01,2025 12:17 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Abnormal cluster_status:Abnormal cluster_status:Abnormal cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:16 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:1Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:17 UTC+0800 logs:1Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:1Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:1Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:1Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged 
Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:17 UTC+0800 logs:1Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:14 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-verticalscaling-9xvzx ns-rqwpo VerticalScaling nebula-vrphvy metad Failed 3/3 Sep 01,2025 12:17 UTC+0800 check ops status done check opsrequest progress ops_status:nebula-vrphvy-verticalscaling-9xvzx ns-rqwpo VerticalScaling nebula-vrphvy metad Failed 3/3 Sep 01,2025 12:17 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests nebula-vrphvy-verticalscaling-9xvzx --namespace ns-rqwpo ` opsrequest.apps.kubeblocks.io/nebula-vrphvy-verticalscaling-9xvzx patched `kbcli cluster delete-ops --name nebula-vrphvy-verticalscaling-9xvzx --force --auto-approve --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-verticalscaling-9xvzx deleted check component graphd exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=graphd --namespace ns-rqwpo | (grep "graphd" || true )` check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=metad --namespace ns-rqwpo | (grep "metad" || true )` check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=storaged --namespace ns-rqwpo | (grep "storaged" || true )` `kubectl get pvc -l app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/component-name=graphd,metad,storaged,apps.kubeblocks.io/vct-name=logs --namespace ns-rqwpo ` No resources found in ns-rqwpo namespace. nebula-vrphvy graphd,metad,storaged logs pvc is empty cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-vrphvy namespace.
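A note on the "logs pvc is empty" result above: kubectl parses the selector `apps.kubeblocks.io/component-name=graphd,metad,storaged` as three separate requirements (one equality plus two bare key-exists checks for labels named "metad" and "storaged"), so it matches nothing. A set-based selector is one way to list the logs PVCs of all three components in a single query; a minimal sketch, assuming the label keys that appear elsewhere in this log:
# Sketch: set-based selector covering graphd, metad and storaged in one query.
kubectl get pvc --namespace ns-rqwpo \
  -l 'app.kubernetes.io/instance=nebula-vrphvy,apps.kubeblocks.io/vct-name=logs,apps.kubeblocks.io/component-name in (graphd,metad,storaged)'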
`kbcli cluster volume-expand nebula-vrphvy --auto-approve --force=true --components graphd,metad,storaged --volume-claim-templates logs --storage 5Gi --namespace ns-rqwpo ` OpsRequest nebula-vrphvy-volumeexpansion-t5ptp created successfully, you can view the progress: kbcli cluster describe-ops nebula-vrphvy-volumeexpansion-t5ptp -n ns-rqwpo check ops status `kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-vrphvy-volumeexpansion-t5ptp ns-rqwpo VolumeExpansion nebula-vrphvy graphd,metad,storaged Creating -/- Sep 01,2025 12:22 UTC+0800 check cluster status `kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-vrphvy ns-rqwpo nebula WipeOut Updating Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name= cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:5Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:16 UTC+0800 nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:5Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:17 UTC+0800 logs:5Gi nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:17 UTC+0800 logs:5Gi nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:14 UTC+0800 logs:5Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy` `kubectl get secrets 
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash`
check cluster connect done
No resources found in nebula-vrphvy namespace.
check ops status
`kbcli cluster list-ops nebula-vrphvy --status all --namespace ns-rqwpo `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-vrphvy-volumeexpansion-t5ptp ns-rqwpo VolumeExpansion nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 12:22 UTC+0800
check ops status done
ops_status:nebula-vrphvy-volumeexpansion-t5ptp ns-rqwpo VolumeExpansion nebula-vrphvy graphd,metad,storaged Succeed 8/8 Sep 01,2025 12:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests nebula-vrphvy-volumeexpansion-t5ptp --namespace ns-rqwpo `
opsrequest.apps.kubeblocks.io/nebula-vrphvy-volumeexpansion-t5ptp patched
`kbcli cluster delete-ops --name nebula-vrphvy-volumeexpansion-t5ptp --force --auto-approve --namespace ns-rqwpo `
OpsRequest nebula-vrphvy-volumeexpansion-t5ptp deleted
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
check db_client batch data count
`echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669 " | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash `
check db_client batch data Success
cluster update terminationPolicy WipeOut
`kbcli cluster update nebula-vrphvy --termination-policy=WipeOut --namespace ns-rqwpo `
cluster.apps.kubeblocks.io/nebula-vrphvy updated (no change)
check cluster status
`kbcli cluster list nebula-vrphvy --show-labels --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-vrphvy ns-rqwpo nebula WipeOut Running Sep 01,2025 11:18 UTC+0800 app.kubernetes.io/instance=nebula-vrphvy,clusterdefinition.kubeblocks.io/name=nebula,clusterversion.kubeblocks.io/name=
check cluster status done
cluster_status:Running
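The connection and batch-data checks above read the generated root account from the graphd account Secret and then run nGQL through nebula-console inside one of the pods. A compact sketch of the same flow, assuming the secret keys username/password/port seen in this run (the base64 decoding is standard Secret handling, not something the script prints):

  # Read the root credentials that KubeBlocks generated for the graphd component.
  NS=ns-rqwpo
  SECRET=nebula-vrphvy-graphd-account-root
  DB_USER=$(kubectl get secrets "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
  DB_PASS=$(kubectl get secrets "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
  DB_PORT=$(kubectl get secrets "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)

  # Count the vertices written by the batch-data step, via nebula-console inside a pod.
  echo "echo \"use default;MATCH (v:executions_loop_table) RETURN count(DISTINCT v);\" | /usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.$NS.svc.cluster.local --user $DB_USER --password '$DB_PASS' --port $DB_PORT" | kubectl exec -it nebula-vrphvy-storaged-0 -n "$NS" -- bash

Piping the command string into `kubectl exec ... -- bash` mirrors what the log shows; it keeps the console invocation on the pod side, where the nebula-console binary actually lives.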
check pod status
`kbcli cluster list-instances nebula-vrphvy --namespace ns-rqwpo `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-vrphvy-graphd-0 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:5Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:16 UTC+0800
nebula-vrphvy-graphd-1 ns-rqwpo nebula-vrphvy graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:5Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800
nebula-vrphvy-metad-0 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-default-26946070-vmss000000/10.224.0.4 Sep 01,2025 12:17 UTC+0800 logs:5Gi
nebula-vrphvy-metad-1 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi
nebula-vrphvy-metad-2 ns-rqwpo nebula-vrphvy metad Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi
nebula-vrphvy-storaged-0 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:17 UTC+0800 logs:5Gi
nebula-vrphvy-storaged-1 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000006/10.224.0.10 Sep 01,2025 12:17 UTC+0800 logs:5Gi
nebula-vrphvy-storaged-2 ns-rqwpo nebula-vrphvy storaged Running 0 200m / 200m 644245094400m / 644245094400m data:2Gi aks-cicdamdpool-25950949-vmss000007/10.224.0.5 Sep 01,2025 12:14 UTC+0800 logs:5Gi
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-vrphvy`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-vrphvy-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:35c3P@334@TD9H@7;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-vrphvy-graphd.ns-rqwpo.svc.cluster.local --user root --password '35c3P@334@TD9H@7' --port 9669" | kubectl exec -it nebula-vrphvy-storaged-0 --namespace ns-rqwpo -- bash`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs nebula-vrphvy --namespace ns-rqwpo `
No log files found. You can enable the log feature with the kbcli command below.
kbcli cluster update nebula-vrphvy --enable-all-logs=true --namespace ns-rqwpo
Error from server (NotFound): pods "nebula-vrphvy-graphd-0" not found
cluster logs
`kbcli cluster logs nebula-vrphvy --tail 30 --namespace ns-rqwpo `
Defaulted container "graphd" out of: graphd, agent, exporter, lorry, config-manager, init-console (init), init-agent (init), init-lorry (init)
I20250901 04:22:18.255807 54 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-2.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.2.89":9559
E20250901 04:22:19.269738 54 ThriftClientManager-inl.h:70] Failed to resolve address for 'nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local': Name or service not known (error=-2): Unknown error -2
==> /usr/local/nebula/logs/nebula-graphd.WARNING <==
E20250901 04:22:19.269738 54 ThriftClientManager-inl.h:70] Failed to resolve address for 'nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local': Name or service not known (error=-2): Unknown error -2
==> /usr/local/nebula/logs/nebula-graphd.ERROR <==
E20250901 04:22:19.269738 54 ThriftClientManager-inl.h:70] Failed to resolve address for 'nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local': Name or service not known (error=-2): Unknown error -2
==> /usr/local/nebula/logs/nebula-graphd.INFO <==
I20250901 04:22:20.274536 54 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.0.215":9559
I20250901 04:22:30.291589 55 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.0.215":9559
I20250901 04:22:30.302011 56 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.0.215":9559
I20250901 04:22:30.312842 52 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.0.215":9559
I20250901 04:22:30.314774 53 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-metad-0.nebula-vrphvy-metad-headless.ns-rqwpo.svc.cluster.local":9559 as "10.244.0.215":9559
I20250901 04:22:30.315910 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-0.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 1 space
I20250901 04:22:30.315951 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-1.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 0 space
I20250901 04:22:30.315968 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-2.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 0 space
I20250901 04:22:30.315974 62 MetaClient.cpp:3269] Load leader ok
I20250901 04:22:34.780964 33 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.2.3]:35048
I20250901 04:22:40.334790 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-0.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 1 space
I20250901 04:22:40.334827 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-1.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 0 space
I20250901 04:22:40.334832 62 MetaClient.cpp:3263] Load leader of "nebula-vrphvy-storaged-2.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 in 0 space
I20250901 04:22:40.334838 62 MetaClient.cpp:3269] Load leader ok
I20250901 04:33:12.632196 32 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.2.3]:42306
I20250901 04:33:14.828676 32 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.2.3]:42332
I20250901 04:33:14.831377 33 SwitchSpaceExecutor.cpp:45] Graph switched to `default', space id: 1
I20250901 04:33:14.832041 53 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-storaged-0.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 as "10.244.2.3":9779
I20250901 04:33:14.854113 54 ThriftClientManager-inl.h:67] resolve "nebula-vrphvy-storaged-0.nebula-vrphvy-storaged-headless.ns-rqwpo.svc.cluster.local":9779 as "10.244.2.3":9779
I20250901 04:33:22.521009 32 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.2.3]:44886
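The logs command above defaulted to the graphd container of a single pod. When a specific instance or sidecar needs inspecting instead, plain kubectl works just as well; a small sketch, with the pod and container names taken from the listing above (picking graphd-1 and the lorry sidecar is only an example):

  # Tail the graphd container of a specific instance.
  kubectl logs nebula-vrphvy-graphd-1 -c graphd --namespace ns-rqwpo --tail=30
  # Or a sidecar of the same pod, e.g. lorry.
  kubectl logs nebula-vrphvy-graphd-1 -c lorry --namespace ns-rqwpo --tail=30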
delete cluster nebula-vrphvy
`kbcli cluster delete nebula-vrphvy --auto-approve --namespace ns-rqwpo `
Cluster nebula-vrphvy deleted
pod_info:nebula-vrphvy-graphd-0 5/5 Running 0 16m
nebula-vrphvy-graphd-1 5/5 Running 0 15m
nebula-vrphvy-metad-0 4/4 Running 0 15m
nebula-vrphvy-metad-1 4/4 Running 2 (11m ago) 15m
nebula-vrphvy-metad-2 4/4 Running 2 (11m ago) 16m
nebula-vrphvy-storaged-0 5/5 Running 0 15m
nebula-vrphvy-storaged-1 5/5 Running 0 15m
nebula-vrphvy-storaged-2 5/5 Running 0 18m
No resources found in ns-rqwpo namespace.
delete cluster pod done
No resources found in ns-rqwpo namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-rqwpo namespace.
delete cluster done
No resources found in ns-rqwpo namespace.
No resources found in ns-rqwpo namespace.
No resources found in ns-rqwpo namespace.
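The "No resources found" checks above confirm that deleting a cluster with terminationPolicy WipeOut also removes its pods and PVCs. A small sketch of that kind of teardown verification, keyed on the instance label the cluster's objects carry in this run (the retry count and sleep are illustrative):

  # Wait until no pods or PVCs labelled with the cluster instance remain.
  NS=ns-rqwpo
  SEL=app.kubernetes.io/instance=nebula-vrphvy
  for i in $(seq 1 30); do
    pods=$(kubectl get pods -l "$SEL" -n "$NS" --no-headers 2>/dev/null | wc -l)
    pvcs=$(kubectl get pvc -l "$SEL" -n "$NS" --no-headers 2>/dev/null | wc -l)
    if [ "$pods" -eq 0 ] && [ "$pvcs" -eq 0 ]; then
      echo "delete cluster done"
      break
    fi
    sleep 10
  done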
Nebula Test Suite All Done!
--------------------------------------Nebula (Topology = default Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[ClusterDefinition=nebula;ClusterVersion=nebula-v3.8.0;]|[Description=Create a cluster with the specified cluster definition nebula and cluster version nebula-v3.8.0]
[PASSED]|[Connect]|[ComponentName=graphd]|[Description=Connect to the cluster]
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=graphd]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[VolumeExpansion]|[ComponentName=metad;ComponentVolume=data]|[Description=VolumeExpansion the cluster specify component metad and volume data]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Restart]|[ComponentName=metad]|[Description=Restart the cluster specify component metad]
[PASSED]|[HorizontalScaling Out]|[ComponentName=storaged]|[Description=HorizontalScaling Out the cluster specify component storaged]
[PASSED]|[HorizontalScaling In]|[ComponentName=storaged]|[Description=HorizontalScaling In the cluster specify component storaged]
[PASSED]|[VerticalScaling]|[ComponentName=storaged]|[Description=VerticalScaling the cluster specify component storaged]
[PASSED]|[HorizontalScaling Out]|[ComponentName=graphd]|[Description=HorizontalScaling Out the cluster specify component graphd]
[PASSED]|[HorizontalScaling In]|[ComponentName=graphd]|[Description=HorizontalScaling In the cluster specify component graphd]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Restart]|[ComponentName=graphd]|[Description=Restart the cluster specify component graphd]
[PASSED]|[Restart]|[ComponentName=storaged]|[Description=Restart the cluster specify component storaged]
[PASSED]|[VerticalScaling]|[ComponentName=graphd]|[Description=VerticalScaling the cluster specify component graphd]
[WARNING]|[VerticalScaling]|[ComponentName=metad]|[Description=VerticalScaling the cluster specify component metad]
[PASSED]|[VolumeExpansion]|[ComponentName=graphd,metad,storaged;ComponentVolume=logs]|[Description=VolumeExpansion the cluster specify component graphd,metad,storaged and volume logs]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]