https://github.com/apecloud/apecloud-cd/actions/runs/21930219260
previous_version: kubeblocks_version:1.0.2

bash test/kbcli/test_kbcli_1.0.sh --type 12 --version 1.0.2 --service-version v3.8 --generate-output true --aws-access-key-id *** --aws-secret-access-key *** --jihulab-token *** --random-namespace true --region eastus --cloud-provider aks

CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:

`kubectl get namespace | grep ns-wlihu`
`kubectl create namespace ns-wlihu`
namespace/ns-wlihu created
create namespace ns-wlihu done

download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.2`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2

Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
	https://kubeblocks.io

download kbcli v1.0.2 done
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Kubernetes Env: v1.32.10

check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default

KubeBlocks version is:1.0.2 skip upgrade KubeBlocks
current KubeBlocks version: 1.0.2

check component definition
set component name:graphd
set component version
set component version:nebula
set service versions:v3.8.0,v3.5.0
set service versions sorted:v3.5.0,v3.8.0
set nebula component definition
set nebula component definition nebula-storaged-1.0.1
REPORT_COUNT 0:0
set replicas first:2,v3.5.0|2,v3.8.0
set replicas second max again:2,v3.8.0
REPORT_COUNT 2:1
CLUSTER_TOPOLOGY:default
cluster definition topology: default
topology default found in cluster definition nebula
set nebula component definition
set nebula component definition nebula-storaged-1.0.1
LIMIT_CPU:0.1
LIMIT_MEMORY:0.5
storage size: 1
CLUSTER_NAME:nebula-yunpih
pod_info:
termination_policy:WipeOut
create 2 replica WipeOut nebula cluster

check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: nebula-graphd-1.0.1 by component version:nebula

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: nebula-yunpih
  namespace: ns-wlihu
spec:
  clusterDef: nebula
  topology: default
  terminationPolicy: WipeOut
  componentSpecs:
    - name: graphd
      serviceVersion: v3.8.0
      replicas: 2
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: metad
      serviceVersion: v3.8.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: storaged
      serviceVersion: v3.8.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
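Note the manifest above leaves `storageClassName:` empty, so each PVC falls back to the cluster's default StorageClass (the harness resolved it to `default` earlier). A quick way to confirm which class is marked default on the AKS cluster (a minimal sketch, not part of the harness):

```bash
# Show each StorageClass and whether it carries the default-class annotation.
kubectl get storageclass \
  -o custom-columns='NAME:.metadata.name,DEFAULT:.metadata.annotations.storageclass\.kubernetes\.io/is-default-class'
```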
`kubectl apply -f test_create_nebula-yunpih.yaml`
cluster.apps.kubeblocks.io/nebula-yunpih created
apply test_create_nebula-yunpih.yaml Success
`rm -rf test_create_nebula-yunpih.yaml`

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Creating   Feb 12,2026 10:05 UTC+0800   clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Creating
cluster_status:Creating
...
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:09 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
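The `jsonpath` reads in the secret checks above return base64-encoded bytes; the harness decodes them into the DB_USERNAME/DB_PASSWORD/DB_PORT values it prints. A standalone equivalent (sketch; secret and namespace names taken from this run):

```bash
NS=ns-wlihu
SECRET=nebula-yunpih-graphd-account-root
# Each .data field in a Secret is base64-encoded; decode before use.
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PORT:${DB_PORT}"   # password deliberately not echoed
```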
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check pod nebula-yunpih-graphd-0 container_name graphd exist password 9**530547CZtK#x3
check pod nebula-yunpih-graphd-0 container_name agent exist password 9**530547CZtK#x3
check pod nebula-yunpih-graphd-0 container_name exporter exist password 9**530547CZtK#x3
check pod nebula-yunpih-graphd-0 container_name kbagent exist password 9**530547CZtK#x3
Container kbagent logs contain secret password: 2026-02-12T02:09:11Z INFO Action Executed {"action": "postProvision", "result": "(root@nebula) [(none)]> ALTER USER root WITH PASSWORD '9**530547CZtK#x3';\nExecution succeeded (time spent 2.743ms/2.96667ms)\n\nThu, 12 Feb 2026 02:09:11 UTC\n\n\n\nBye root!\nThu, 12 Feb 2026 02:09:11 UTC\n\n"}

describe cluster
`kbcli cluster describe nebula-yunpih --namespace ns-wlihu`
Name: nebula-yunpih
Created Time: Feb 12,2026 10:05 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
ns-wlihu    nebula               default    Running   WipeOut

Endpoints:
COMPONENT   INTERNAL                                                EXTERNAL
graphd      nebula-yunpih-graphd.ns-wlihu.svc.cluster.local:9669
            nebula-yunpih-graphd.ns-wlihu.svc.cluster.local:19669
            nebula-yunpih-graphd.ns-wlihu.svc.cluster.local:19670

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE   ROLE   STATUS   AZ   NODE   CREATED-TIME
graphd     v3.8.0   nebula-yunpih-graphd-0     Running   0   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:08 UTC+0800
graphd     v3.8.0   nebula-yunpih-graphd-1     Running   0   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:08 UTC+0800
metad      v3.8.0   nebula-yunpih-metad-0      Running   0   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:05 UTC+0800
metad      v3.8.0   nebula-yunpih-metad-1      Running   0   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:05 UTC+0800
metad      v3.8.0   nebula-yunpih-metad-2      Running   0   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:05 UTC+0800
storaged   v3.8.0   nebula-yunpih-storaged-0   Running   0   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:09 UTC+0800
storaged   v3.8.0   nebula-yunpih-storaged-1   Running   0   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:09 UTC+0800
storaged   v3.8.0   nebula-yunpih-storaged-2   Running   0   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:09 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
graphd      100m / 100m   512Mi / 512Mi   logs:1Gi   default
metad       100m / 100m   512Mi / 512Mi   data:1Gi   default
                                          logs:1Gi   default
storaged    100m / 100m   512Mi / 512Mi   data:1Gi   default
                                          logs:1Gi   default

Images:
COMPONENT   COMPONENT-DEFINITION   IMAGE
graphd     nebula-graphd-1.0.1     docker.io/apecloud/nebula-graphd:v3.8.0
                                   docker.io/apecloud/nebula-agent:3.7.1
                                   docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                   docker.io/apecloud/nebula-console:v3.8.0
                                   docker.io/apecloud/kubeblocks-tools:0.9.4
metad      nebula-metad-1.0.1      docker.io/apecloud/nebula-metad:v3.8.0
                                   docker.io/apecloud/nebula-agent:3.7.1
                                   docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                   docker.io/apecloud/kubeblocks-tools:0.9.4
storaged   nebula-storaged-1.0.1   docker.io/apecloud/nebula-storaged:v3.8.0
                                   docker.io/apecloud/nebula-agent:3.7.1
                                   docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                   docker.io/apecloud/nebula-tool:1.0.0
                                   docker.io/apecloud/kubeblocks-tools:0.9.4

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-wlihu nebula-yunpih
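The `exist password` checks above scan each container of the pod for the root password, and the harness flags that the kbagent log leaks it through the postProvision `ALTER USER` statement. A rough reconstruction of that scan (sketch; assumes `DB_PASSWORD` from the earlier decode step):

```bash
POD=nebula-yunpih-graphd-0
NS=ns-wlihu
for c in $(kubectl get pod "$POD" -n "$NS" -o jsonpath='{.spec.containers[*].name}'); do
  # Grep the full log of every container for the literal password string.
  if kubectl logs "$POD" -n "$NS" -c "$c" --tail=-1 2>/dev/null | grep -qF "$DB_PASSWORD"; then
    echo "Container $c logs contain secret password"
  fi
done
```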
`kbcli cluster label nebula-yunpih app.kubernetes.io/instance- --namespace ns-wlihu`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label nebula-yunpih app.kubernetes.io/instance=nebula-yunpih --namespace ns-wlihu`
`kbcli cluster label nebula-yunpih --list --namespace ns-wlihu`
NAME   NAMESPACE   LABELS
nebula-yunpih   ns-wlihu   app.kubernetes.io/instance=nebula-yunpih clusterdefinition.kubeblocks.io/name=nebula
label cluster app.kubernetes.io/instance=nebula-yunpih Success

`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=nebula-yunpih --namespace ns-wlihu`
`kbcli cluster label nebula-yunpih --list --namespace ns-wlihu`
NAME   NAMESPACE   LABELS
nebula-yunpih   ns-wlihu   app.kubernetes.io/instance=nebula-yunpih case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test1 Success

`kbcli cluster label nebula-yunpih case.name=kbcli.test2 --overwrite --namespace ns-wlihu`
`kbcli cluster label nebula-yunpih --list --namespace ns-wlihu`
NAME   NAMESPACE   LABELS
nebula-yunpih   ns-wlihu   app.kubernetes.io/instance=nebula-yunpih case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test2 Success

`kbcli cluster label nebula-yunpih case.name- --namespace ns-wlihu`
`kbcli cluster label nebula-yunpih --list --namespace ns-wlihu`
NAME   NAMESPACE   LABELS
nebula-yunpih   ns-wlihu   app.kubernetes.io/instance=nebula-yunpih clusterdefinition.kubeblocks.io/name=nebula
delete cluster label case.name Success

cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
`echo "echo \"SHOW HOSTS;\" | /usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
Welcome!

(root@nebula) [(none)]>
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
| Host                                                                                   | Port | Status   | Leader count | Leader distribution  | Partition distribution | Version |
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
| "nebula-yunpih-storaged-0.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.8.0" |
| "nebula-yunpih-storaged-1.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.8.0" |
| "nebula-yunpih-storaged-2.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.8.0" |
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
Got 3 rows (time spent 869µs/1.40982ms)

Thu, 12 Feb 2026 02:11:20 UTC

(root@nebula) [(none)]>
Bye root!

Thu, 12 Feb 2026 02:11:20 UTC
connect cluster Success

cluster graphd scale-out
cluster graphd scale-out replicas: 3
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-out nebula-yunpih --auto-approve --force=true --components graphd --replicas 1 --namespace ns-wlihu`
OpsRequest nebula-yunpih-horizontalscaling-fbfkq created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-horizontalscaling-fbfkq -n ns-wlihu
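Here `--replicas 1` is the number of replicas to add, not a target count: graphd goes from 2 to 3. The command is shorthand for a HorizontalScaling OpsRequest; a roughly equivalent manifest, assuming the operations.kubeblocks.io/v1alpha1 schema (a sketch, not what the harness actually submitted):

```bash
kubectl create -f - <<'EOF'
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: nebula-yunpih-horizontalscaling-
  namespace: ns-wlihu
spec:
  clusterName: nebula-yunpih
  type: HorizontalScaling
  horizontalScaling:
  - componentName: graphd
    scaleOut:
      replicaChanges: 1   # add one replica: 2 -> 3
EOF
```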
check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-horizontalscaling-fbfkq   ns-wlihu   HorizontalScaling   nebula-yunpih   graphd   Running   0/1   Feb 12,2026 10:11 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-graphd-2   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:11 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:09 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-horizontalscaling-fbfkq   ns-wlihu   HorizontalScaling   nebula-yunpih   graphd   Succeed   1/1   Feb 12,2026 10:11 UTC+0800
check ops status done
ops_status:nebula-yunpih-horizontalscaling-fbfkq ns-wlihu HorizontalScaling nebula-yunpih graphd Succeed 1/1 Feb 12,2026 10:11 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-horizontalscaling-fbfkq --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-horizontalscaling-fbfkq patched
`kbcli cluster delete-ops --name nebula-yunpih-horizontalscaling-fbfkq --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-horizontalscaling-fbfkq deleted
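After every op the harness clears the OpsRequest's finalizers and then force-deletes it, so a stuck finalizer can never block teardown. The same two commands recur throughout this log; folded into a helper (sketch):

```bash
delete_ops() {   # usage: delete_ops <ops-name> <namespace>
  # Drop finalizers first so deletion cannot hang on the controller.
  kubectl patch opsrequests.operations.kubeblocks.io "$1" --namespace "$2" \
    --type=merge -p '{"metadata":{"finalizers":[]}}'
  kbcli cluster delete-ops --name "$1" --force --auto-approve --namespace "$2"
}
```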
cluster graphd scale-in
cluster graphd scale-in replicas: 2
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-in nebula-yunpih --auto-approve --force=true --components graphd --replicas 1 --namespace ns-wlihu`
OpsRequest nebula-yunpih-horizontalscaling-hpq8w created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-horizontalscaling-hpq8w -n ns-wlihu

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-horizontalscaling-hpq8w   ns-wlihu   HorizontalScaling   nebula-yunpih   graphd   Running   0/1   Feb 12,2026 10:12 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Running   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:08 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:05 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:09 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:09 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-horizontalscaling-hpq8w   ns-wlihu   HorizontalScaling   nebula-yunpih   graphd   Succeed   1/1   Feb 12,2026 10:12 UTC+0800
check ops status done
ops_status:nebula-yunpih-horizontalscaling-hpq8w ns-wlihu HorizontalScaling nebula-yunpih graphd Succeed 1/1 Feb 12,2026 10:12 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-horizontalscaling-hpq8w --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-horizontalscaling-hpq8w patched
`kbcli cluster delete-ops --name nebula-yunpih-horizontalscaling-hpq8w --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-horizontalscaling-hpq8w deleted
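The full-cluster restart that follows works by stamping a `kubeblocks.io/restart` timestamp annotation into each componentSpec (the annotations are visible in the cluster YAML dumped later in this log). A kubectl-only equivalent for a single component, assuming componentSpecs/0 is graphd (sketch; note the JSON-Patch `add` replaces any existing annotations map on that entry):

```bash
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
kubectl patch clusters.apps.kubeblocks.io nebula-yunpih -n ns-wlihu --type=json -p "[
  {\"op\": \"add\",
   \"path\": \"/spec/componentSpecs/0/annotations\",
   \"value\": {\"kubeblocks.io/restart\": \"${TS}\"}}
]"
```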
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart nebula-yunpih --auto-approve --force=true --namespace ns-wlihu`
OpsRequest nebula-yunpih-restart-th6ks created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-restart-th6ks -n ns-wlihu

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-restart-th6ks   ns-wlihu   Restart   nebula-yunpih   graphd,metad,storaged   Running   0/8   Feb 12,2026 10:12 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running
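The long `cluster_status:Updating` run above is the harness polling the cluster phase at a fixed interval until it reports Running again. A minimal version of that loop (sketch; the interval and retry budget are assumptions, not the harness's values):

```bash
status=""
for _ in $(seq 1 120); do
  status=$(kubectl get clusters.apps.kubeblocks.io nebula-yunpih -n ns-wlihu \
    -o jsonpath='{.status.phase}')
  echo "cluster_status:${status}"
  [ "$status" = "Running" ] && break
  sleep 5
done
[ "$status" = "Running" ] || echo "[Error] check cluster status timeout"
```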
check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:18 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:12 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-restart-th6ks   ns-wlihu   Restart   nebula-yunpih   graphd,metad,storaged   Succeed   8/8   Feb 12,2026 10:12 UTC+0800
check ops status done
ops_status:nebula-yunpih-restart-th6ks ns-wlihu Restart nebula-yunpih graphd,metad,storaged Succeed 8/8 Feb 12,2026 10:12 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-restart-th6ks --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-restart-th6ks patched
`kbcli cluster delete-ops --name nebula-yunpih-restart-th6ks --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-restart-th6ks deleted

check component storaged exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true )`

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart nebula-yunpih --auto-approve --force=true --components storaged --namespace ns-wlihu`
OpsRequest nebula-yunpih-restart-ztr2x created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-restart-ztr2x -n ns-wlihu

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-restart-ztr2x   ns-wlihu   Restart   nebula-yunpih   storaged   Running   0/3   Feb 12,2026 10:19 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:18 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:13 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:20 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:19 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:19 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-restart-ztr2x   ns-wlihu   Restart   nebula-yunpih   storaged   Succeed   3/3   Feb 12,2026 10:19 UTC+0800
check ops status done
ops_status:nebula-yunpih-restart-ztr2x ns-wlihu Restart nebula-yunpih storaged Succeed 3/3 Feb 12,2026 10:19 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-restart-ztr2x --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-restart-ztr2x patched
`kbcli cluster delete-ops --name nebula-yunpih-restart-ztr2x --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-restart-ztr2x deleted
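Next the harness vertically scales metad. Like scale-out, `kbcli cluster vscale` is shorthand for an OpsRequest that sets both requests and limits; a roughly equivalent manifest, again assuming the operations.kubeblocks.io/v1alpha1 schema (sketch):

```bash
kubectl create -f - <<'EOF'
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: nebula-yunpih-verticalscaling-
  namespace: ns-wlihu
spec:
  clusterName: nebula-yunpih
  type: VerticalScaling
  verticalScaling:
  - componentName: metad
    requests:
      cpu: 200m
      memory: 0.6Gi
    limits:
      cpu: 200m
      memory: 0.6Gi
EOF
```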
check component metad exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad --namespace ns-wlihu | (grep "metad" || true )`

check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale nebula-yunpih --auto-approve --force=true --components metad --cpu 200m --memory 0.6Gi --namespace ns-wlihu`
OpsRequest nebula-yunpih-verticalscaling-nd2vc created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-verticalscaling-nd2vc -n ns-wlihu

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-verticalscaling-nd2vc   ns-wlihu   VerticalScaling   nebula-yunpih   metad   Running   0/3   Feb 12,2026 10:21 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:18 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:22 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:20 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:19 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   100m / 100m   512Mi / 512Mi   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:19 UTC+0800
check pod status done
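The odd-looking `644245094400m` in the memory columns is just 0.6Gi rendered exactly: 0.6 × 2^30 bytes = 644,245,094.4 bytes, which Kubernetes keeps precise by expressing in milli-bytes. Quick check (sketch):

```bash
# 0.6Gi is fractional in bytes, so the API server stores it in milli-byte units.
awk 'BEGIN { b = 0.6 * 2^30; printf "0.6Gi = %.1f bytes = %.0fm\n", b, b * 1000 }'
# prints: 0.6Gi = 644245094.4 bytes = 644245094400m
```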
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-verticalscaling-nd2vc   ns-wlihu   VerticalScaling   nebula-yunpih   metad   Succeed   3/3   Feb 12,2026 10:21 UTC+0800
check ops status done
ops_status:nebula-yunpih-verticalscaling-nd2vc ns-wlihu VerticalScaling nebula-yunpih metad Succeed 3/3 Feb 12,2026 10:21 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-verticalscaling-nd2vc --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-verticalscaling-nd2vc patched
`kbcli cluster delete-ops --name nebula-yunpih-verticalscaling-nd2vc --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-verticalscaling-nd2vc deleted

check component storaged exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true )`

check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale nebula-yunpih --auto-approve --force=true --components storaged --cpu 200m --memory 0.6Gi --namespace ns-wlihu`
OpsRequest nebula-yunpih-verticalscaling-zgnnj created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-verticalscaling-zgnnj -n ns-wlihu

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-verticalscaling-zgnnj   ns-wlihu   VerticalScaling   nebula-yunpih   storaged   Running   0/3   Feb 12,2026 10:22 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:18 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Running   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:12 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:22 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:24 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:23 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:22 UTC+0800
check pod status done

`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default

check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-verticalscaling-zgnnj   ns-wlihu   VerticalScaling   nebula-yunpih   storaged   Succeed   3/3   Feb 12,2026 10:22 UTC+0800
check ops status done
ops_status:nebula-yunpih-verticalscaling-zgnnj ns-wlihu VerticalScaling nebula-yunpih storaged Succeed 3/3 Feb 12,2026 10:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-verticalscaling-zgnnj --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-verticalscaling-zgnnj patched
`kbcli cluster delete-ops --name nebula-yunpih-verticalscaling-zgnnj --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-verticalscaling-zgnnj deleted

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart nebula-yunpih --auto-approve --force=true --components graphd --namespace ns-wlihu`
OpsRequest nebula-yunpih-restart-dzwf6 created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-restart-dzwf6 -n ns-wlihu
check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
nebula-yunpih-restart-dzwf6   ns-wlihu   Restart   nebula-yunpih   graphd   Running   0/2   Feb 12,2026 10:24 UTC+0800

check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
nebula-yunpih   ns-wlihu   nebula   WipeOut   Updating   Feb 12,2026 10:05 UTC+0800   app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
...
[Error] check cluster status timeout

--------------------------------------get cluster nebula-yunpih yaml--------------------------------------
`kubectl get cluster nebula-yunpih -o yaml --namespace ns-wlihu`
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  annotations:
    kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1
    kubeblocks.io/ops-request: '[{"name":"nebula-yunpih-restart-dzwf6","type":"Restart"}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps.kubeblocks.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"nebula-yunpih","namespace":"ns-wlihu"},"spec":{"clusterDef":"nebula","componentSpecs":[{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"graphd","replicas":2,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]},{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"metad","replicas":3,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}},{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]},{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"storaged","replicas":3,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}},{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]}],"terminationPolicy":"WipeOut","topology":"default"}}
  creationTimestamp: "2026-02-12T02:05:10Z"
  finalizers:
  - cluster.kubeblocks.io/finalizer
  generation: 9
  labels:
    app.kubernetes.io/instance: nebula-yunpih
    clusterdefinition.kubeblocks.io/name: nebula
  name: nebula-yunpih
  namespace: ns-wlihu
  resourceVersion: "42371"
  uid: 341a2b52-7450-4566-bdf2-4b47d964ce7c
spec:
  clusterDef: nebula
  componentSpecs:
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:24:46Z"
    componentDef: nebula-graphd-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: graphd
    podUpdatePolicy: PreferInPlace
    replicas: 2
    resources:
      limits:
        cpu: 100m
        memory: 512Mi
      requests:
        cpu: 100m
        memory: 512Mi
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:12:31Z"
    componentDef: nebula-metad-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: metad
    podUpdatePolicy: PreferInPlace
    replicas: 3
    resources:
      limits:
        cpu: 200m
        memory: 644245094400m
      requests:
        cpu: 200m
        memory: 644245094400m
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:19:09Z"
    componentDef: nebula-storaged-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: storaged
    podUpdatePolicy: PreferInPlace
    replicas: 3
    resources:
      limits:
        cpu: 200m
        memory: 644245094400m
      requests:
        cpu: 200m
        memory: 644245094400m
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  terminationPolicy: WipeOut
  topology: default
status:
  components:
    graphd:
      observedGeneration: 9
      phase: Updating
      upToDate: true
    metad:
      observedGeneration: 9
      phase: Running
      upToDate: true
    storaged:
      message:
        InstanceSet/nebula-yunpih-storaged: '["nebula-yunpih-storaged-1"]'
      observedGeneration: 9
      phase: Running
      upToDate: true
  conditions:
  - lastTransitionTime: "2026-02-12T02:05:10Z"
    message: 'The operator has started the provisioning of Cluster: nebula-yunpih'
    observedGeneration: 9
    reason: PreCheckSucceed
    status: "True"
    type: ProvisioningStarted
  - lastTransitionTime: "2026-02-12T02:05:10Z"
    message: Successfully applied for resources
    observedGeneration: 9
    reason: ApplyResourcesSucceed
    status: "True"
    type: ApplyResources
  - lastTransitionTime: "2026-02-12T02:18:56Z"
    message: cluster nebula-yunpih is ready
    reason: ClusterReady
    status: "True"
    type: Ready
  observedGeneration: 9
  phase: Updating
------------------------------------------------------------------------------------------------------------------

--------------------------------------describe cluster nebula-yunpih--------------------------------------
`kubectl describe cluster nebula-yunpih --namespace ns-wlihu`
Name:         nebula-yunpih
Namespace:    ns-wlihu
Labels:       app.kubernetes.io/instance=nebula-yunpih
              clusterdefinition.kubeblocks.io/name=nebula
Annotations:  kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1
              kubeblocks.io/ops-request: [{"name":"nebula-yunpih-restart-dzwf6","type":"Restart"}]
API Version:  apps.kubeblocks.io/v1
Kind:         Cluster
Metadata:
  Creation Timestamp:  2026-02-12T02:05:10Z
  Finalizers:
    cluster.kubeblocks.io/finalizer
  Generation:        9
  Resource Version:  42371
  UID:               341a2b52-7450-4566-bdf2-4b47d964ce7c
Spec:
  Cluster Def:  nebula
  Component Specs:
    Annotations:
      kubeblocks.io/restart:  2026-02-12T02:24:46Z
    Component Def:            nebula-graphd-1.0.1
    Env:
      Name:             DEFAULT_TIMEZONE
      Value:            UTC+00:00:00
    Name:               graphd
    Pod Update Policy:  PreferInPlace
    Replicas:           2
    Resources:
      Limits:
        Cpu:     100m
        Memory:  512Mi
      Requests:
        Cpu:          100m
        Memory:       512Mi
    Service Version:  v3.8.0
    Volume Claim Templates:
      Name:  logs
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
    Annotations:
      kubeblocks.io/restart:  2026-02-12T02:12:31Z
    Component Def:            nebula-metad-1.0.1
    Env:
      Name:             DEFAULT_TIMEZONE
      Value:            UTC+00:00:00
    Name:               metad
    Pod Update Policy:  PreferInPlace
    Replicas:           3
    Resources:
      Limits:
        Cpu:     200m
        Memory:  644245094400m
      Requests:
        Cpu:          200m
        Memory:       644245094400m
    Service Version:  v3.8.0
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
      Name:  logs
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
    Annotations:
      kubeblocks.io/restart:  2026-02-12T02:19:09Z
    Component Def:            nebula-storaged-1.0.1
    Env:
      Name:             DEFAULT_TIMEZONE
      Value:            UTC+00:00:00
    Name:               storaged
    Pod Update Policy:  PreferInPlace
    Replicas:           3
    Resources:
      Limits:
        Cpu:     200m
        Memory:  644245094400m
      Requests:
        Cpu:          200m
        Memory:       644245094400m
    Service Version:  v3.8.0
    Volume Claim Templates:
      Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
      Name:  logs
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:  1Gi
  Termination Policy:  WipeOut
  Topology:            default
Status:
  Components:
    Graphd:
      Observed Generation:  9
      Phase:                Updating
      Up To Date:           true
    Metad:
      Observed Generation:  9
      Phase:                Running
      Up To Date:           true
    Storaged:
      Message:
        InstanceSet/nebula-yunpih-storaged:  ["nebula-yunpih-storaged-1"]
      Observed Generation:                   9
      Phase:                                 Running
      Up To Date:                            true
  Conditions:
    Last Transition Time:  2026-02-12T02:05:10Z
    Message:               The operator has started the provisioning of Cluster: nebula-yunpih
    Observed Generation:   9
    Reason:                PreCheckSucceed
    Status:                True
    Type:                  ProvisioningStarted
    Last Transition Time:  2026-02-12T02:05:10Z
    Message:               Successfully applied for resources
    Observed Generation:   9
    Reason:                ApplyResourcesSucceed
    Status:                True
    Type:                  ApplyResources
    Last Transition Time:  2026-02-12T02:18:56Z
    Message:               cluster nebula-yunpih is ready
    Reason:                ClusterReady
    Status:                True
    Type:                  Ready
  Observed Generation:     9
  Phase:                   Updating
Events:
  Type    Reason                           Age                 From                Message
  ----    ------                           ----                ----                -------
  Normal  PreCheckSucceed                  27m (x2 over 27m)   cluster-controller  The operator has started the provisioning of Cluster: nebula-yunpih
  Normal  ApplyResourcesSucceed            27m (x2 over 27m)   cluster-controller  Successfully applied for resources
  Normal  ClusterComponentPhaseTransition  24m (x5 over 27m)   cluster-controller  cluster component metad is Creating
  Normal  ClusterComponentPhaseTransition  23m (x2 over 23m)   cluster-controller  cluster component graphd is Creating
  Normal  ClusterComponentPhaseTransition  22m (x12 over 23m)  cluster-controller  cluster component metad is Running
  Normal  ClusterComponentPhaseTransition  11m (x48 over 22m)  cluster-controller  cluster component graphd is Running
------------------------------------------------------------------------------------------------------------------

check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-yunpih-graphd-0   ns-wlihu   nebula-yunpih   graphd   Running    0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:18 UTC+0800
nebula-yunpih-graphd-1   ns-wlihu   nebula-yunpih   graphd   Init:0/5   0   100m / 100m   512Mi / 512Mi   logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:24 UTC+0800
nebula-yunpih-metad-0   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:22 UTC+0800
nebula-yunpih-metad-1   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-metad-2   ns-wlihu   nebula-yunpih   metad   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:21 UTC+0800
nebula-yunpih-storaged-0   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000003/10.224.0.6   Feb 12,2026 10:24 UTC+0800
nebula-yunpih-storaged-1   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000001/10.224.0.8   Feb 12,2026 10:23 UTC+0800
nebula-yunpih-storaged-2   ns-wlihu   nebula-yunpih   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:1Gi logs:1Gi   aks-cicdamdpool-17242166-vmss000000/10.224.0.9   Feb 12,2026 10:22 UTC+0800
pod_status:Init:0/5
pod_status:Init:0/5
...
pod_status:PodInitializing
check pod status done
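The `pod_status:Init:0/5` run above is the harness waiting for the recreated graphd-1 pod to get through its five init containers. A kubectl-native way to wait for the same condition (sketch; the timeout value is an assumption):

```bash
kubectl wait pod nebula-yunpih-graphd-1 -n ns-wlihu \
  --for=condition=Ready --timeout=15m
```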
pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:Init:0/5(B pod_status:PodInitializing(B check pod status done(B check cluster status again cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status again done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`(B   `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-yunpih-restart-dzwf6 ns-wlihu Restart nebula-yunpih graphd Succeed 2/2 Feb 12,2026 10:24 UTC+0800 check ops status done(B ops_status:nebula-yunpih-restart-dzwf6 ns-wlihu Restart nebula-yunpih graphd Succeed 2/2 Feb 12,2026 10:24 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-restart-dzwf6 --namespace ns-wlihu `(B  opsrequest.operations.kubeblocks.io/nebula-yunpih-restart-dzwf6 patched  `kbcli cluster delete-ops --name nebula-yunpih-restart-dzwf6 --force --auto-approve --namespace ns-wlihu `(B  OpsRequest nebula-yunpih-restart-dzwf6 deleted check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster vscale nebula-yunpih --auto-approve --force=true --components graphd --cpu 200m --memory 0.6Gi --namespace ns-wlihu `(B  OpsRequest nebula-yunpih-verticalscaling-lstsz created successfully, you can view the progress: kbcli cluster describe-ops nebula-yunpih-verticalscaling-lstsz -n ns-wlihu check ops status  `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-yunpih-verticalscaling-lstsz ns-wlihu VerticalScaling nebula-yunpih graphd Running 0/2 Feb 12,2026 10:38 UTC+0800 check cluster status  `kbcli cluster list nebula-yunpih --show-labels 
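Note: the jsonpath reads used throughout this run return base64-encoded secret data; the plaintext on the DB_USERNAME line implies a decode step in the script. A minimal sketch of the same read with the decode made explicit, assuming the same secret name and namespace:
 `kubectl get secrets nebula-yunpih-graphd-account-root --namespace ns-wlihu -o jsonpath="{.data.password}" | base64 -d  # decode the base64 payload to plaintext`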
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (x9)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:38 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:38 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:22 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:21 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:21 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:24 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:23 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:1Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:22 UTC+0800
check pod status done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-verticalscaling-lstsz  ns-wlihu  VerticalScaling  nebula-yunpih  graphd  Succeed  2/2  Feb 12,2026 10:38 UTC+0800
check ops status done
ops_status:nebula-yunpih-verticalscaling-lstsz ns-wlihu VerticalScaling nebula-yunpih graphd Succeed 2/2 Feb 12,2026 10:38 UTC+0800
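Note: 644245094400m in the MEMORY column is the requested 0.6Gi rendered in Kubernetes' canonical milli-units (0.6 x 1024^3 = 644245094.4 bytes, i.e. 644245094400 millibytes). A quick way to confirm the value actually set on the spec, as a sketch against the same Cluster object:
 `kubectl get cluster nebula-yunpih --namespace ns-wlihu -o jsonpath='{.spec.componentSpecs[?(@.name=="graphd")].resources.limits.memory}'  # expect 644245094400m`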
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-verticalscaling-lstsz --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-verticalscaling-lstsz patched
 `kbcli cluster delete-ops --name nebula-yunpih-verticalscaling-lstsz --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-verticalscaling-lstsz deleted
check component metad exists
 `kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad --namespace ns-wlihu | (grep "metad" || true )`
check component storaged exists
 `kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true )`
 `kubectl get pvc -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad,storaged,apps.kubeblocks.io/vct-name=data --namespace ns-wlihu `
nebula-yunpih metad,storaged data pvc is empty
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
 `kbcli cluster volume-expand nebula-yunpih --auto-approve --force=true --components metad,storaged --volume-claim-templates data --storage 2Gi --namespace ns-wlihu `
OpsRequest nebula-yunpih-volumeexpansion-78x2l created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-volumeexpansion-78x2l -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-volumeexpansion-78x2l  ns-wlihu  VolumeExpansion  nebula-yunpih  metad,storaged  Running  0/6  Feb 12,2026 10:39 UTC+0800
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (x37)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
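Note: the "pvc is empty" result above is likely a side effect of the comma-joined selector: in kubectl, `-l a=b,storaged,c=d` treats the bare `storaged` as "label key storaged exists", so the query matches nothing. Separately, online PVC growth like this 1Gi to 2Gi expansion only succeeds when the backing StorageClass permits it; a quick preflight, sketched against the default class reported earlier in the run:
 `kubectl get storageclass default -o jsonpath='{.allowVolumeExpansion}'  # should print true before attempting volume-expand`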
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:38 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:38 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:22 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:21 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:21 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:24 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:23 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:22 UTC+0800
check pod status done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-volumeexpansion-78x2l  ns-wlihu  VolumeExpansion  nebula-yunpih  metad,storaged  Succeed  6/6  Feb 12,2026 10:39 UTC+0800
check ops status done
ops_status:nebula-yunpih-volumeexpansion-78x2l ns-wlihu VolumeExpansion nebula-yunpih metad,storaged Succeed 6/6 Feb 12,2026 10:39 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-volumeexpansion-78x2l --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-volumeexpansion-78x2l patched
 `kbcli cluster delete-ops --name nebula-yunpih-volumeexpansion-78x2l --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-volumeexpansion-78x2l deleted
cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
 `kbcli cluster stop nebula-yunpih --auto-approve --force=true --namespace ns-wlihu `
OpsRequest nebula-yunpih-stop-9mbkz created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-stop-9mbkz -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-stop-9mbkz  ns-wlihu  Stop  nebula-yunpih  graphd,metad,storaged  Running  0/8  Feb 12,2026 10:55 UTC+0800
check cluster status
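Note: a Stop ops deletes the workload pods but is expected to keep the PVCs, which is why the data volumes come back at 2Gi after the later Start. A sketch for confirming nothing is reclaimed while the cluster sits Stopped:
 `kubectl get pvc -l app.kubernetes.io/instance=nebula-yunpih --namespace ns-wlihu  # claims should remain Bound through the stop/start cycle`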
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Stopped  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Stopped
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
(no instances listed while the cluster is stopped)
check pod status done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-stop-9mbkz  ns-wlihu  Stop  nebula-yunpih  graphd,metad,storaged  Succeed  8/8  Feb 12,2026 10:55 UTC+0800
check ops status done
ops_status:nebula-yunpih-stop-9mbkz ns-wlihu Stop nebula-yunpih graphd,metad,storaged Succeed 8/8 Feb 12,2026 10:55 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-stop-9mbkz --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-stop-9mbkz patched
 `kbcli cluster delete-ops --name nebula-yunpih-stop-9mbkz --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-stop-9mbkz deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
 `kbcli cluster start nebula-yunpih --force=true --namespace ns-wlihu `
OpsRequest nebula-yunpih-start-rmzzf created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-start-rmzzf -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-start-rmzzf  ns-wlihu  Start  nebula-yunpih  graphd,metad,storaged  Running  0/8  Feb 12,2026 10:55 UTC+0800
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (x4)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
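Note: the restarted pods are scheduled fresh, so node assignments can shuffle relative to the pre-stop layout (compare the NODE column in the next listing with the earlier one). For following a Stop/Start transition outside kbcli, a sketch using the same CRD plural the patch commands above target:
 `kubectl get opsrequests.operations --namespace ns-wlihu -w  # live watch of ops phase transitions`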
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
check pod status done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-start-rmzzf  ns-wlihu  Start  nebula-yunpih  graphd,metad,storaged  Succeed  8/8  Feb 12,2026 10:55 UTC+0800
check ops status done
ops_status:nebula-yunpih-start-rmzzf ns-wlihu Start nebula-yunpih graphd,metad,storaged Succeed 8/8 Feb 12,2026 10:55 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-start-rmzzf --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-start-rmzzf patched
 `kbcli cluster delete-ops --name nebula-yunpih-start-rmzzf --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-start-rmzzf deleted
check component metad exists
 `kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad --namespace ns-wlihu | (grep "metad" || true )`
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
 `kbcli cluster restart nebula-yunpih --auto-approve --force=true --components metad --namespace ns-wlihu `
OpsRequest nebula-yunpih-restart-7m5dt created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-restart-7m5dt -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-restart-7m5dt  ns-wlihu  Restart  nebula-yunpih  metad  Running  0/3  Feb 12,2026 10:57 UTC+0800
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (x11)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
check pod status done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-restart-7m5dt  ns-wlihu  Restart  nebula-yunpih  metad  Succeed  3/3  Feb 12,2026 10:57 UTC+0800
check ops status done
ops_status:nebula-yunpih-restart-7m5dt ns-wlihu Restart nebula-yunpih metad Succeed 3/3 Feb 12,2026 10:57 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-restart-7m5dt --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-restart-7m5dt patched
 `kbcli cluster delete-ops --name nebula-yunpih-restart-7m5dt --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-restart-7m5dt deleted
test failover connectionstress
check cluster status before cluster-failover-connectionstress
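Note: the Restart ops was scoped with --components metad, and only the three metad pods picked up new CREATED-TIME stamps in the listing above; graphd and storaged were untouched. One way to double-check from the pod side, as a sketch reusing the component label the script queries elsewhere:
 `kubectl get pods -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad --namespace ns-wlihu -o wide  # AGE should reset only for metad pods`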
check cluster status done
cluster_status:Running
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-yunpih --namespace ns-wlihu `
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-nebula-yunpih
  namespace: ns-wlihu
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "nebula-yunpih-graphd.ns-wlihu.svc.cluster.local"
        - "--user"
        - "root"
        - "--password"
        - "9**530547CZtK#x3"
        - "--port"
        - "9669"
        - "--database"
        - "default"
        - "--dbtype"
        - "nebula"
        - "--test"
        - "connectionstress"
        - "--connections"
        - "300"
        - "--duration"
        - "60"
  restartPolicy: Never
 `kubectl apply -f test-db-client-connectionstress-nebula-yunpih.yaml`
pod/test-db-client-connectionstress-nebula-yunpih created
apply test-db-client-connectionstress-nebula-yunpih.yaml Success
 `rm -rf test-db-client-connectionstress-nebula-yunpih.yaml`
check pod status
pod_status: test-db-client-connectionstress-nebula-yunpih  1/1  Running  0  (polled x31, AGE 6s through 2m43s)
check pod test-db-client-connectionstress-nebula-yunpih status done
pod_status: test-db-client-connectionstress-nebula-yunpih  0/1  Completed  0  2m48s
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Running  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
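Note: Completed only means the client pod exited; the pass/fail detail lives in its logs, which appears to be what the stack traces further below are. Retrieving them directly, as a sketch:
 `kubectl logs test-db-client-connectionstress-nebula-yunpih --namespace ns-wlihu  # dump the dbclient test output`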
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
check pod status done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
03:00:39.459 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.4.47 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
03:00:39.459 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed.
Failed to connect to Nebula space: create session failed. Trying with space Nebula.
CREATE SPACE default Successfully
java.io.IOException: Failed to connect to Nebula space: create session failed.
	at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:76)
	at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126)
	at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
	at OneClient.executeTest(OneClient.java:108)
	at OneClient.main(OneClient.java:40)
Caused by: java.lang.RuntimeException: create session failed.
	at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:130)
	at com.vesoft.nebula.client.graph.SessionPool.<init>(SessionPool.java:76)
	at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73)
	... 4 more
Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.4.47 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
	at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224)
	at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485)
	at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126)
	... 6 more
(the same Auth failed / SessionPool init failed / retry cycle repeats with identical traces at 03:00:49, 03:00:59, and 03:01:09)
Releasing connections...
Test Result: null
Connection Information:
  Database Type: nebula
  Host: nebula-yunpih-graphd.ns-wlihu.svc.cluster.local
  Port: 9669
  Database: default
  Table:
  User: root
  Org:
  Access Mode: mysql
  Test Type: connectionstress
  Connection Count: 300
  Duration: 60 seconds
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-yunpih --namespace ns-wlihu `
pod/test-db-client-connectionstress-nebula-yunpih patched (no change)
pod "test-db-client-connectionstress-nebula-yunpih" force deleted
check failover pod name
failover pod name:nebula-yunpih-graphd-0
failover connectionstress Success
check component storaged exists
 `kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true )`
cluster storaged scale-out
cluster storaged scale-out replicas: 4
check cluster status before ops
check cluster status done
cluster_status:Running
 `kbcli cluster scale-out nebula-yunpih --auto-approve --force=true --components storaged --replicas 2 --namespace ns-wlihu `
OpsRequest nebula-yunpih-horizontalscaling-stjpd created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-horizontalscaling-stjpd -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-horizontalscaling-stjpd  ns-wlihu  HorizontalScaling  nebula-yunpih  storaged  Running  0/2  Feb 12,2026 11:01 UTC+0800
check cluster status
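Note: the client drives exactly 300 sessions into graphd's default max_sessions_per_ip_per_user=300 ceiling, so the AuthFailedException looks like the stress test hitting the configured cap rather than a cluster fault, and the step is still marked Success. If the limit itself ever needed raising, one hedged option is KubeBlocks' reconfiguration ops, assuming the nebula addon wires this graphd flag into its config engine (flag names below follow kbcli's generic ops syntax and are not verified against this addon):
 `kbcli cluster configure nebula-yunpih --components graphd --set max_sessions_per_ip_per_user=1000 --namespace ns-wlihu  # hypothetical: key exposure depends on the addon's config template`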
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (x85)
[Error] check cluster status timeout
--------------------------------------get cluster nebula-yunpih yaml--------------------------------------
 `kubectl get cluster nebula-yunpih -o yaml --namespace ns-wlihu `
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  annotations:
    kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1
    kubeblocks.io/ops-request: '[{"name":"nebula-yunpih-horizontalscaling-stjpd","type":"HorizontalScaling"}]'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps.kubeblocks.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"nebula-yunpih","namespace":"ns-wlihu"},"spec":{"clusterDef":"nebula","componentSpecs":[{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"graphd","replicas":2,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]},{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"metad","replicas":3,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}},{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]},{"env":[{"name":"DEFAULT_TIMEZONE","value":"UTC+00:00:00"}],"name":"storaged","replicas":3,"resources":{"limits":{"cpu":"100m","memory":"0.5Gi"},"requests":{"cpu":"100m","memory":"0.5Gi"}},"serviceVersion":"v3.8.0","volumeClaimTemplates":[{"name":"data","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}},{"name":"logs","spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":null}}]}],"terminationPolicy":"WipeOut","topology":"default"}}
  creationTimestamp: "2026-02-12T02:05:10Z"
  finalizers:
  - cluster.kubeblocks.io/finalizer
  generation: 15
  labels:
    app.kubernetes.io/instance: nebula-yunpih
    clusterdefinition.kubeblocks.io/name: nebula
  name: nebula-yunpih
  namespace: ns-wlihu
  resourceVersion: "81998"
  uid: 341a2b52-7450-4566-bdf2-4b47d964ce7c
spec:
  clusterDef: nebula
  componentSpecs:
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:24:46Z"
    componentDef: nebula-graphd-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: graphd
    podUpdatePolicy: PreferInPlace
    replicas: 2
    resources:
      limits:
        cpu: 200m
        memory: 644245094400m
      requests:
        cpu: 200m
        memory: 644245094400m
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:57:14Z"
    componentDef: nebula-metad-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: metad
    podUpdatePolicy: PreferInPlace
    replicas: 3
    resources:
      limits:
        cpu: 200m
        memory: 644245094400m
      requests:
        cpu: 200m
        memory: 644245094400m
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - annotations:
      kubeblocks.io/restart: "2026-02-12T02:19:09Z"
    componentDef: nebula-storaged-1.0.1
    env:
    - name: DEFAULT_TIMEZONE
      value: UTC+00:00:00
    name: storaged
    podUpdatePolicy: PreferInPlace
    replicas: 5
    resources:
      limits:
        cpu: 200m
        memory: 644245094400m
      requests:
        cpu: 200m
        memory: 644245094400m
    serviceVersion: v3.8.0
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
    - name: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  terminationPolicy: WipeOut
  topology: default
status:
  components:
    graphd:
      observedGeneration: 15
      phase: Running
      upToDate: true
    metad:
      observedGeneration: 15
      phase: Running
      upToDate: true
    storaged:
      message:
        InstanceSet/nebula-yunpih-storaged: '["nebula-yunpih-storaged-1"]'
      observedGeneration: 15
      phase: Updating
      upToDate: true
  conditions:
  - lastTransitionTime: "2026-02-12T02:05:10Z"
    message: 'The operator has started the provisioning of Cluster: nebula-yunpih'
    observedGeneration: 15
    reason: PreCheckSucceed
    status: "True"
    type: ProvisioningStarted
  - lastTransitionTime: "2026-02-12T02:05:10Z"
    message: Successfully applied for resources
    observedGeneration: 15
    reason: ApplyResourcesSucceed
    status: "True"
    type: ApplyResources
  - lastTransitionTime: "2026-02-12T02:56:54Z"
    message: cluster nebula-yunpih is ready
    reason: ClusterReady
    status: "True"
    type: Ready
  observedGeneration: 15
  phase: Updating
------------------------------------------------------------------------------------------------------------------
--------------------------------------describe cluster nebula-yunpih--------------------------------------
 `kubectl describe cluster nebula-yunpih --namespace ns-wlihu `
Name:         nebula-yunpih
Namespace:    ns-wlihu
Labels:       app.kubernetes.io/instance=nebula-yunpih
              clusterdefinition.kubeblocks.io/name=nebula
Annotations:  kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1
              kubeblocks.io/ops-request: [{"name":"nebula-yunpih-horizontalscaling-stjpd","type":"HorizontalScaling"}]
API Version:  apps.kubeblocks.io/v1
Kind:         Cluster
Metadata:
  Creation Timestamp:  2026-02-12T02:05:10Z
  Finalizers:          cluster.kubeblocks.io/finalizer
  Generation:          15
  Resource Version:    81998
  UID:                 341a2b52-7450-4566-bdf2-4b47d964ce7c
Spec and Status: field-for-field the same as the YAML dump above (generation 15; graphd and metad Running, storaged Updating on nebula-yunpih-storaged-1; cluster Phase Updating).
Events:
  Type    Reason                           Age                     From                Message
  ----    ------                           ----                    ----                -------
  Normal  ClusterComponentPhaseTransition  59m (x2 over 59m)       cluster-controller  cluster component graphd is Creating
  Normal  ClusterComponentPhaseTransition  31m (x111 over 60m)     cluster-controller  cluster component metad is Running
  Normal  ClusterComponentPhaseTransition  15m (x100 over 58m)     cluster-controller  cluster component storaged is Running
  Normal  ClusterComponentPhaseTransition  15m (x21 over 56m)      cluster-controller  cluster component metad is Updating
  Normal  ClusterComponentPhaseTransition  13m (x6 over 13m)       cluster-controller  cluster component storaged is Stopped
  Normal  ClusterComponentPhaseTransition  7m35s (x134 over 59m)   cluster-controller  cluster component graphd is Running
------------------------------------------------------------------------------------------------------------------
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running   0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running   0  200m / 200m  644245094400m / 644245094400m  logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-3  ns-wlihu  nebula-yunpih  storaged  Init:0/5  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 11:01 UTC+0800
nebula-yunpih-storaged-4  ns-wlihu  nebula-yunpih  storaged  Running   0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-default-27607381-vmss000000/10.224.0.4      Feb 12,2026 11:01 UTC+0800
pod_status:Init:0/5 (x7)
pod_status:Init:2/5
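Note: when the status check times out like this, the cluster status already names the straggler (InstanceSet/nebula-yunpih-storaged: ["nebula-yunpih-storaged-1"]). A closer look at the underlying workload object, sketched on the assumption that KubeBlocks 1.x serves InstanceSets from the workloads.kubeblocks.io group:
 `kubectl describe instancesets.workloads.kubeblocks.io nebula-yunpih-storaged --namespace ns-wlihu  # assumption: group/plural as in KubeBlocks 1.x`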
check pod status done
check cluster status again
cluster_status:Updating (x17)
check cluster status again done
 `kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
 `echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-horizontalscaling-stjpd  ns-wlihu  HorizontalScaling  nebula-yunpih  storaged  Succeed  2/2  Feb 12,2026 11:01 UTC+0800
check ops status done
ops_status:nebula-yunpih-horizontalscaling-stjpd ns-wlihu HorizontalScaling nebula-yunpih storaged Succeed 2/2 Feb 12,2026 11:01 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-horizontalscaling-stjpd --namespace ns-wlihu `
opsrequest.operations.kubeblocks.io/nebula-yunpih-horizontalscaling-stjpd patched
 `kbcli cluster delete-ops --name nebula-yunpih-horizontalscaling-stjpd --force --auto-approve --namespace ns-wlihu `
OpsRequest nebula-yunpih-horizontalscaling-stjpd deleted
check component storaged exists
 `kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true )`
cluster storaged scale-in
cluster storaged scale-in replicas: 2
check cluster status before ops
check cluster status done
cluster_status:Running
 `kbcli cluster scale-in nebula-yunpih --auto-approve --force=true --components storaged --replicas 2 --namespace ns-wlihu `
OpsRequest nebula-yunpih-horizontalscaling-995lx created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-yunpih-horizontalscaling-995lx -n ns-wlihu
check ops status
 `kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu `
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-horizontalscaling-995lx  ns-wlihu  HorizontalScaling  nebula-yunpih  storaged  Creating  -/-  Feb 12,2026 11:10 UTC+0800
check cluster status
 `kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu `
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Running  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu `
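Note: with these kbcli subcommands --replicas reads as a delta rather than a target: scale-out --replicas 2 took storaged from 3 to 5 (the cluster YAML above shows replicas: 5, with storaged-3/-4 added), so this scale-in --replicas 2 should bring it back to 3. Confirming the end state by label, as a sketch:
 `kubectl get pods -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu  # expect 3 pods once the ops succeeds`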
check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi           aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:1Gi           aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-3  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 11:01 UTC+0800
nebula-yunpih-storaged-4  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:1Gi  aks-default-27607381-vmss000000/10.224.0.4      Feb 12,2026 11:01 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-horizontalscaling-995lx  ns-wlihu  HorizontalScaling  nebula-yunpih  storaged  Running  0/2  Feb 12,2026 11:10 UTC+0800
ops_status:nebula-yunpih-horizontalscaling-995lx ns-wlihu HorizontalScaling nebula-yunpih storaged Running 0/2 Feb 12,2026 11:10 UTC+0800 (repeated ×7 over successive polls)
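Note: the `{.data.*}` jsonpath reads above return base64-encoded bytes; the decoded values are what appear in the DB_USERNAME/DB_PASSWORD/DB_PORT line. A sketch of the extraction (variable names are ours, not the harness's):

```bash
# Sketch: read and decode the root account credentials from the Secret.
secret=nebula-yunpih-graphd-account-root
DB_USERNAME=$(kubectl get secret "$secret" --namespace ns-wlihu \
  -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$secret" --namespace ns-wlihu \
  -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$secret" --namespace ns-wlihu \
  -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:$DB_USERNAME;DB_PASSWORD:$DB_PASSWORD;DB_PORT:$DB_PORT"
```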
check ops status done
ops_status:nebula-yunpih-horizontalscaling-995lx ns-wlihu HorizontalScaling nebula-yunpih storaged Succeed 2/2 Feb 12,2026 11:10 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-horizontalscaling-995lx --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-horizontalscaling-995lx patched
`kbcli cluster delete-ops --name nebula-yunpih-horizontalscaling-995lx --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-horizontalscaling-995lx deleted
check component graphd exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=graphd --namespace ns-wlihu | (grep "graphd" || true)`
check component metad exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=metad --namespace ns-wlihu | (grep "metad" || true)`
check component storaged exists
`kubectl get components -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=storaged --namespace ns-wlihu | (grep "storaged" || true)`
`kubectl get pvc -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=graphd,metad,storaged,apps.kubeblocks.io/vct-name=logs --namespace ns-wlihu`
nebula-yunpih graphd,metad,storaged logs pvc is empty
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster volume-expand nebula-yunpih --auto-approve --force=true --components graphd,metad,storaged --volume-claim-templates logs --storage 7Gi --namespace ns-wlihu`
OpsRequest nebula-yunpih-volumeexpansion-l5nqg created successfully, you can view the progress: kbcli cluster describe-ops nebula-yunpih-volumeexpansion-l5nqg -n ns-wlihu
check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-volumeexpansion-l5nqg  ns-wlihu  VolumeExpansion  nebula-yunpih  graphd,metad,storaged  Running  0/8  Feb 12,2026 11:11 UTC+0800
check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Updating  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (repeated ×37 over successive polls while the expansion proceeded)
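Note: the "pvc is empty" result above follows from label-selector semantics. In `-l ...component-name=graphd,metad,storaged,...`, the bare tokens `metad` and `storaged` are parsed as label-key existence requirements, not as extra values for component-name, so the combined selector matches no PVC. Querying one component at a time matches the intent; the same query can verify the 7Gi expansion afterwards (a sketch):

```bash
# Sketch: list the logs PVCs per component and show their current capacity.
for comp in graphd metad storaged; do
  kubectl get pvc \
    -l app.kubernetes.io/instance=nebula-yunpih,apps.kubeblocks.io/component-name=$comp,apps.kubeblocks.io/vct-name=logs \
    --namespace ns-wlihu \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
done
```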
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:7Gi           aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:7Gi           aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-yunpih --status all --namespace ns-wlihu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
nebula-yunpih-volumeexpansion-l5nqg  ns-wlihu  VolumeExpansion  nebula-yunpih  graphd,metad,storaged  Succeed  8/8  Feb 12,2026 11:11 UTC+0800
check ops status done
ops_status:nebula-yunpih-volumeexpansion-l5nqg ns-wlihu VolumeExpansion nebula-yunpih graphd,metad,storaged Succeed 8/8 Feb 12,2026 11:11 UTC+0800
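Note: the "check ops status" loop can also be driven with kubectl alone, assuming the OpsRequest exposes its phase and progress at .status.phase and .status.progress (which is where the Succeed and 8/8 values above come from). A sketch:

```bash
# Sketch: wait for an OpsRequest to reach phase Succeed, printing progress.
ops=nebula-yunpih-volumeexpansion-l5nqg
until [ "$(kubectl get opsrequests.operations.kubeblocks.io "$ops" \
      --namespace ns-wlihu -o jsonpath='{.status.phase}')" = "Succeed" ]; do
  kubectl get opsrequests.operations.kubeblocks.io "$ops" \
    --namespace ns-wlihu -o jsonpath='{.status.progress}'; echo
  sleep 10
done
```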
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-yunpih-volumeexpansion-l5nqg --namespace ns-wlihu`
opsrequest.operations.kubeblocks.io/nebula-yunpih-volumeexpansion-l5nqg patched
`kbcli cluster delete-ops --name nebula-yunpih-volumeexpansion-l5nqg --force --auto-approve --namespace ns-wlihu`
OpsRequest nebula-yunpih-volumeexpansion-l5nqg deleted
cluster update terminationPolicy WipeOut
`kbcli cluster update nebula-yunpih --termination-policy=WipeOut --namespace ns-wlihu`
cluster.apps.kubeblocks.io/nebula-yunpih updated (no change)
check cluster status
`kbcli cluster list nebula-yunpih --show-labels --namespace ns-wlihu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
nebula-yunpih  ns-wlihu  nebula  WipeOut  Running  Feb 12,2026 10:05 UTC+0800  app.kubernetes.io/instance=nebula-yunpih,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-yunpih --namespace ns-wlihu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
nebula-yunpih-graphd-0    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:7Gi           aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-graphd-1    ns-wlihu  nebula-yunpih  graphd    Running  0  200m / 200m  644245094400m / 644245094400m  logs:7Gi           aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-metad-0     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:58 UTC+0800
nebula-yunpih-metad-1     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-metad-2     ns-wlihu  nebula-yunpih  metad     Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:57 UTC+0800
nebula-yunpih-storaged-0  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000001/10.224.0.8  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-1  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000000/10.224.0.9  Feb 12,2026 10:55 UTC+0800
nebula-yunpih-storaged-2  ns-wlihu  nebula-yunpih  storaged  Running  0  200m / 200m  644245094400m / 644245094400m  data:2Gi logs:7Gi  aks-cicdamdpool-17242166-vmss000003/10.224.0.6  Feb 12,2026 10:55 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-yunpih`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-yunpih-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9**530547CZtK#x3;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --user root --password '9**530547CZtK#x3' --port 9669" | kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- bash`
check cluster connect done
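Note: `kbcli cluster update` reported "(no change)" because the cluster was already created with terminationPolicy WipeOut. The equivalent direct patch on the Cluster resource would be (a sketch; kbcli is the supported path):

```bash
# Sketch: set terminationPolicy by patching the Cluster object directly.
# WipeOut means deletion removes workloads, PVCs, and backups.
kubectl patch clusters.apps.kubeblocks.io nebula-yunpih \
  --namespace ns-wlihu --type=merge \
  -p '{"spec":{"terminationPolicy":"WipeOut"}}'
```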
cluster list-logs
`kbcli cluster list-logs nebula-yunpih --namespace ns-wlihu`
cluster logs
`kbcli cluster logs nebula-yunpih --tail 30 --namespace ns-wlihu`
I20260212 03:10:47.110862 62 MetaClient.cpp:3269] Load leader ok
I20260212 03:10:58.478745 25 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.4.20]:58458
I20260212 03:10:59.504594 21 SwitchSpaceExecutor.cpp:45] Graph switched to `default', space id: 1
E20260212 03:10:59.520146 23 QueryInstance.cpp:151] There are still space on the host, query: DROP HOSTS "nebula-yunpih-storaged-3.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779
==> /usr/local/nebula/logs/nebula-graphd.WARNING <==
E20260212 03:10:59.520146 23 QueryInstance.cpp:151] There are still space on the host, query: DROP HOSTS "nebula-yunpih-storaged-3.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779
==> /usr/local/nebula/logs/nebula-graphd.ERROR <==
E20260212 03:10:59.520146 23 QueryInstance.cpp:151] There are still space on the host, query: DROP HOSTS "nebula-yunpih-storaged-3.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779
==> /usr/local/nebula/logs/nebula-graphd.INFO <==
I20260212 03:11:02.266201 21 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.3.16]:41394
I20260212 03:11:03.296245 21 SwitchSpaceExecutor.cpp:45] Graph switched to `default', space id: 1
I20260212 03:11:03.299889 25 SwitchSpaceExecutor.cpp:45] Graph switched to `default', space id: 1
I20260212 03:11:07.139614 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-0.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:07.139652 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-1.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:07.139660 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-2.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:07.139667 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-3.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:07.139671 62 MetaClient.cpp:3269] Load leader ok
I20260212 03:11:09.389228 23 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.3.16]:38308
I20260212 03:11:17.156420 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-0.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:17.156459 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-1.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:17.156467 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-2.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:17.156472 62 MetaClient.cpp:3269] Load leader ok
I20260212 03:11:27.165230 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-0.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:27.165269 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-1.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:27.165278 62 MetaClient.cpp:3263] Load leader of "nebula-yunpih-storaged-2.nebula-yunpih-storaged-headless.ns-wlihu.svc.cluster.local":9779 in 1 space
I20260212 03:11:27.165283 62 MetaClient.cpp:3269] Load leader ok
I20260212 03:27:23.197328 27 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.3.16]:56990
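Note: the E-level entries show the operator's DROP HOSTS for storaged-3 being rejected while that host still held space partitions; the later "Load leader" lines list only storaged-0 through storaged-2, so the drop succeeded once data had moved off the host. Host and partition state can be inspected by hand with SHOW HOSTS through the bundled console (a sketch; assumes this console build supports the -e single-statement flag):

```bash
# Sketch: run SHOW HOSTS against graphd to see each storaged host's partitions.
kubectl exec -it nebula-yunpih-storaged-0 --namespace ns-wlihu -- \
  /usr/local/nebula/console/nebula-console \
    --addr nebula-yunpih-graphd.ns-wlihu.svc.cluster.local --port 9669 \
    --user root --password '9**530547CZtK#x3' \
    -e 'SHOW HOSTS;'
```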
delete cluster nebula-yunpih
`kbcli cluster delete nebula-yunpih --auto-approve --namespace ns-wlihu`
pod_info:
nebula-yunpih-graphd-0    5/5  Running  0            32m
nebula-yunpih-graphd-1    5/5  Running  0            32m
nebula-yunpih-metad-0     4/4  Running  0            29m
nebula-yunpih-metad-1     4/4  Running  0            29m
nebula-yunpih-metad-2     4/4  Running  0            30m
nebula-yunpih-storaged-0  5/5  Running  0            32m
nebula-yunpih-storaged-1  5/5  Running  1 (31m ago)  32m
nebula-yunpih-storaged-2  5/5  Running  0            32m
Cluster nebula-yunpih deleted
delete cluster pod done
check cluster resources no longer exist
OK: pvc
delete cluster done
Nebula Test Suite All Done!
Test Engine: nebula
Test Type: 12
--------------------------------------Nebula v3.8.0 (Topology = default, Replicas = 2) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=default;ComponentDefinition=nebula-graphd-1.0.1;ComponentVersion=nebula;ServiceVersion=v3.8.0]|[Description=Create a cluster with topology default, component definition nebula-graphd-1.0.1, component version nebula, and service version v3.8.0]
[PASSED]|[Connect]|[ComponentName=graphd]|[Description=Connect to the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=graphd]|[Description=Horizontally scale out the cluster's graphd component]
[PASSED]|[HorizontalScaling In]|[ComponentName=graphd]|[Description=Horizontally scale in the cluster's graphd component]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Restart]|[ComponentName=storaged]|[Description=Restart the cluster's storaged component]
[PASSED]|[VerticalScaling]|[ComponentName=metad]|[Description=Vertically scale the cluster's metad component]
[PASSED]|[VerticalScaling]|[ComponentName=storaged]|[Description=Vertically scale the cluster's storaged component]
[PASSED]|[Restart]|[ComponentName=graphd]|[Description=Restart the cluster's graphd component]
[PASSED]|[VerticalScaling]|[ComponentName=graphd]|[Description=Vertically scale the cluster's graphd component]
[PASSED]|[VolumeExpansion]|[ComponentName=metad;ComponentVolume=data]|[Description=Expand the metad component's data volume]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Restart]|[ComponentName=metad]|[Description=Restart the cluster's metad component]
[PASSED]|[NoFailover]|[HA=Connection Stress;ComponentName=graphd]|[Description=Simulates pods under connection stress from expected or undesired processes, testing the application's resilience to slowness or unavailability of replicas under high connection load]
[PASSED]|[HorizontalScaling Out]|[ComponentName=storaged]|[Description=Horizontally scale out the cluster's storaged component]
[PASSED]|[HorizontalScaling In]|[ComponentName=storaged]|[Description=Horizontally scale in the cluster's storaged component]
[PASSED]|[VolumeExpansion]|[ComponentName=graphd,metad,storaged;ComponentVolume=logs]|[Description=Expand the logs volume of the graphd, metad, and storaged components]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster's terminationPolicy to WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]