https://github.com/apecloud/kubeblocks/actions/runs/21898070903
previous_version: kubeblocks_version:1.0.2
bash test/kbcli/test_kbcli_1.0.sh --type 12 --version 1.0.2 --generate-output true --chaos-mesh true --aws-access-key-id *** --aws-secret-access-key *** --jihulab-token *** --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-fsore`
`kubectl create namespace ns-fsore`
namespace/ns-fsore created
create namespace ns-fsore done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.2`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
	https://kubeblocks.io

download kbcli v1.0.2 done
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Kubernetes Env: v1.32.10
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is:1.0.2
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.2
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name:graphd
set component version
set component version:nebula
set service versions:v3.8.0,v3.5.0
set service versions sorted:v3.5.0,v3.8.0
set nebula component definition
set nebula component definition nebula-graphd-1.0.1
REPORT_COUNT 0:0
set replicas first:2,v3.5.0|2,v3.8.0
set replicas third:2,v3.5.0
set replicas
fourth:2,v3.5.0
set minimum cmpv service version
set minimum cmpv service version replicas:2,v3.5.0
set replicas end:2,v3.5.0
REPORT_COUNT:1
CLUSTER_TOPOLOGY:default
cluster definition topology: default
topology default found in cluster definition nebula
set nebula component definition
set nebula component definition nebula-storaged-1.0.1
LIMIT_CPU:0.1
LIMIT_MEMORY:0.5
storage size: 1
CLUSTER_NAME:nebula-lfnhoq
pod_info:
termination_policy:WipeOut
create 2 replica WipeOut nebula cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: nebula-graphd-1.0.1 by component version:nebula

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: nebula-lfnhoq
  namespace: ns-fsore
spec:
  clusterDef: nebula
  topology: default
  terminationPolicy: WipeOut
  componentSpecs:
    - name: graphd
      serviceVersion: v3.5.0
      replicas: 2
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: metad
      serviceVersion: v3.5.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: storaged
      serviceVersion: v3.5.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi

`kubectl apply -f
test_create_nebula-lfnhoq.yaml`
cluster.apps.kubeblocks.io/nebula-lfnhoq created
apply test_create_nebula-lfnhoq.yaml Success
`rm -rf test_create_nebula-lfnhoq.yaml`
check cluster status
`kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-lfnhoq ns-fsore nebula WipeOut Creating Feb 11,2026 17:29 UTC+0800 clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check pod nebula-lfnhoq-graphd-0 container_name graphd exist password 015&uQ3fl&40k28!
check pod nebula-lfnhoq-graphd-0 container_name agent exist password 015&uQ3fl&40k28!
check pod nebula-lfnhoq-graphd-0 container_name exporter exist password 015&uQ3fl&40k28!
check pod nebula-lfnhoq-graphd-0 container_name kbagent exist password 015&uQ3fl&40k28!
check pod nebula-lfnhoq-graphd-0 container_name config-manager exist password 015&uQ3fl&40k28!
No container logs
contain secret password.
describe cluster
`kbcli cluster describe nebula-lfnhoq --namespace ns-fsore`
Name: nebula-lfnhoq   Created Time: Feb 11,2026 17:29 UTC+0800
NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY
ns-fsore nebula default Running WipeOut

Endpoints:
COMPONENT INTERNAL EXTERNAL
graphd nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local:9669
       nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local:19669
       nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local:19670

Topology:
COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME
graphd v3.5.0 nebula-lfnhoq-graphd-0 Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
graphd v3.5.0 nebula-lfnhoq-graphd-1 Running 0 aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
metad v3.5.0 nebula-lfnhoq-metad-0 Running 0 aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
metad v3.5.0 nebula-lfnhoq-metad-1 Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
metad v3.5.0 nebula-lfnhoq-metad-2 Running 0 aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
storaged v3.5.0 nebula-lfnhoq-storaged-0 Running 0 aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
storaged v3.5.0 nebula-lfnhoq-storaged-1 Running 0 aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
storaged v3.5.0 nebula-lfnhoq-storaged-2 Running 0 aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800

Resources Allocation:
COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
graphd 100m / 100m 512Mi / 512Mi logs:1Gi default
metad 100m / 100m 512Mi / 512Mi data:1Gi default logs:1Gi default
storaged 100m / 100m 512Mi / 512Mi data:1Gi default logs:1Gi default

Images:
COMPONENT COMPONENT-DEFINITION IMAGE
graphd nebula-graphd-1.0.1 docker.io/apecloud/nebula-graphd:v3.5.0
                           docker.io/apecloud/nebula-agent:3.7.1
                           docker.io/apecloud/nebula-stats-exporter:v3.8.0
                           docker.io/apecloud/nebula-console:v3.8.0
                           docker.io/apecloud/kubeblocks-tools:0.9.4
metad nebula-metad-1.0.1 docker.io/apecloud/nebula-metad:v3.5.0
                         docker.io/apecloud/nebula-agent:3.7.1
                         docker.io/apecloud/nebula-stats-exporter:v3.8.0
                         docker.io/apecloud/kubeblocks-tools:0.9.4
storaged nebula-storaged-1.0.1 docker.io/apecloud/nebula-storaged:v3.5.0
                               docker.io/apecloud/nebula-agent:3.7.1
                               docker.io/apecloud/nebula-stats-exporter:v3.8.0
                               docker.io/apecloud/nebula-tool:1.0.0
                               docker.io/apecloud/kubeblocks-tools:0.9.4

Data Protection:
BACKUP-REPO AUTO-BACKUP BACKUP-SCHEDULE BACKUP-METHOD BACKUP-RETENTION RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-fsore nebula-lfnhoq
`kbcli cluster label nebula-lfnhoq app.kubernetes.io/instance- --namespace ns-fsore`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label nebula-lfnhoq app.kubernetes.io/instance=nebula-lfnhoq --namespace ns-fsore`
`kbcli cluster label nebula-lfnhoq --list --namespace ns-fsore`
NAME NAMESPACE LABELS
nebula-lfnhoq ns-fsore app.kubernetes.io/instance=nebula-lfnhoq clusterdefinition.kubeblocks.io/name=nebula
label cluster app.kubernetes.io/instance=nebula-lfnhoq Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=nebula-lfnhoq --namespace ns-fsore`
`kbcli cluster label nebula-lfnhoq --list --namespace ns-fsore`
NAME NAMESPACE LABELS
nebula-lfnhoq ns-fsore app.kubernetes.io/instance=nebula-lfnhoq case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test1 Success
`kbcli cluster label nebula-lfnhoq case.name=kbcli.test2 --overwrite --namespace ns-fsore`
`kbcli cluster label nebula-lfnhoq --list --namespace ns-fsore`
NAME NAMESPACE LABELS
nebula-lfnhoq ns-fsore app.kubernetes.io/instance=nebula-lfnhoq case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test2 Success
`kbcli
cluster label nebula-lfnhoq case.name- --namespace ns-fsore`
`kbcli cluster label nebula-lfnhoq --list --namespace ns-fsore`
NAME NAMESPACE LABELS
nebula-lfnhoq ns-fsore app.kubernetes.io/instance=nebula-lfnhoq clusterdefinition.kubeblocks.io/name=nebula
delete cluster label case.name Success
cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
`echo "echo \"SHOW HOSTS;\" | /usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`
Welcome!
(root@nebula) [(none)]>
| Host | Port | Status | Leader count | Leader distribution | Partition distribution | Version |
| "nebula-lfnhoq-storaged-0.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.5.0" |
| "nebula-lfnhoq-storaged-1.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.5.0" |
| "nebula-lfnhoq-storaged-2.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" | "3.5.0" |
Got 3 rows (time spent 792µs/1.24301ms)
Wed, 11 Feb 2026 09:32:57 UTC
(root@nebula) [(none)]> Bye root!
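The connectivity check above expects every storaged host in the SHOW HOSTS table to report "ONLINE". A minimal standalone sketch of that filter, with the table mocked below (in the real run it would come from the nebula-console output piped through kubectl exec):

```shell
# Hypothetical sketch: verify every storaged row of a SHOW HOSTS table is ONLINE.
# hosts_output is mocked; the harness would capture nebula-console output instead.
hosts_output='| "nebula-lfnhoq-storaged-0.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" |
| "nebula-lfnhoq-storaged-1.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" |
| "nebula-lfnhoq-storaged-2.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local" | 9779 | "ONLINE" |'

# Count storaged rows that do NOT contain ONLINE; 0 means all hosts are healthy.
offline=$(printf '%s\n' "$hosts_output" | grep storaged | grep -vc ONLINE)
if [ "$offline" -eq 0 ]; then
  echo "all storaged hosts ONLINE"
else
  echo "found $offline offline storaged host(s)" >&2
fi
```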
Wed, 11 Feb 2026 09:32:57 UTC
connect cluster Success
check component graphd exists
`kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=graphd --namespace ns-fsore | (grep "graphd" || true)`
check component metad exists
`kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=metad --namespace ns-fsore | (grep "metad" || true)`
check component storaged exists
`kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true)`
`kubectl get pvc -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=graphd,metad,storaged,apps.kubeblocks.io/vct-name=logs --namespace ns-fsore`
nebula-lfnhoq graphd,metad,storaged logs pvc is empty
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster volume-expand nebula-lfnhoq --auto-approve --force=true --components graphd,metad,storaged --volume-claim-templates logs --storage 2Gi --namespace ns-fsore`
OpsRequest nebula-lfnhoq-volumeexpansion-ddts5 created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-lfnhoq-volumeexpansion-ddts5 -n ns-fsore
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-volumeexpansion-ddts5 ns-fsore VolumeExpansion nebula-lfnhoq graphd,metad,storaged Pending -/- Feb 11,2026 17:32 UTC+0800
check cluster status
`kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
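The repeated cluster_status lines in this log come from a wait loop that re-reads the cluster phase until it reaches Running. A minimal sketch of that loop with the status source mocked (function name hypothetical; the real script would parse `kbcli cluster list` output and sleep between attempts):

```shell
# Hypothetical sketch of the status-polling loop behind the repeated
# cluster_status lines. get_cluster_status is mocked to return Updating
# three times and then Running.
i=0
status=""
get_cluster_status() {
  i=$((i + 1))
  if [ "$i" -lt 4 ]; then
    status="Updating"
  else
    status="Running"
  fi
}
until [ "$status" = "Running" ]; do
  get_cluster_status
  echo "cluster_status:$status"
done
echo "check cluster status done"
```

The function sets a variable rather than echoing its result, so the loop counter survives (a `$(...)` substitution would run it in a subshell and loop forever).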
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-volumeexpansion-ddts5 ns-fsore VolumeExpansion nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:32 UTC+0800
check ops status done
ops_status:nebula-lfnhoq-volumeexpansion-ddts5 ns-fsore VolumeExpansion nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:32 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-volumeexpansion-ddts5 --namespace ns-fsore`
opsrequest.operations.kubeblocks.io/nebula-lfnhoq-volumeexpansion-ddts5 patched
`kbcli cluster delete-ops --name nebula-lfnhoq-volumeexpansion-ddts5 --force --auto-approve --namespace ns-fsore`
OpsRequest nebula-lfnhoq-volumeexpansion-ddts5 deleted
cluster graphd scale-out
cluster graphd scale-out replicas: 3
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-out nebula-lfnhoq --auto-approve --force=true --components graphd --replicas 1 --namespace ns-fsore`
OpsRequest
nebula-lfnhoq-horizontalscaling-k2b4q created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-lfnhoq-horizontalscaling-k2b4q -n ns-fsore
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-horizontalscaling-k2b4q ns-fsore HorizontalScaling nebula-lfnhoq graphd Creating -/- Feb 11,2026 17:39 UTC+0800
check cluster status
`kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-2 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:39 UTC+0800
nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!'
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-horizontalscaling-k2b4q ns-fsore HorizontalScaling nebula-lfnhoq graphd Succeed 1/1 Feb 11,2026 17:39 UTC+0800
check ops status done
ops_status:nebula-lfnhoq-horizontalscaling-k2b4q ns-fsore HorizontalScaling nebula-lfnhoq graphd Succeed 1/1 Feb 11,2026 17:39 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-horizontalscaling-k2b4q --namespace ns-fsore`
opsrequest.operations.kubeblocks.io/nebula-lfnhoq-horizontalscaling-k2b4q patched
`kbcli cluster delete-ops --name nebula-lfnhoq-horizontalscaling-k2b4q --force --auto-approve --namespace ns-fsore`
OpsRequest nebula-lfnhoq-horizontalscaling-k2b4q deleted
cluster graphd scale-in
cluster graphd scale-in replicas: 2
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-in nebula-lfnhoq --auto-approve --force=true --components graphd --replicas 1 --namespace ns-fsore`
OpsRequest nebula-lfnhoq-horizontalscaling-tz7fw created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-lfnhoq-horizontalscaling-tz7fw -n ns-fsore
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-horizontalscaling-tz7fw ns-fsore HorizontalScaling nebula-lfnhoq graphd Creating -/- Feb 11,2026 17:40 UTC+0800
check cluster status
`kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-lfnhoq ns-fsore nebula WipeOut Running Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets
nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-horizontalscaling-tz7fw ns-fsore HorizontalScaling nebula-lfnhoq graphd Succeed 1/1 Feb 11,2026 17:40 UTC+0800
check ops status done
ops_status:nebula-lfnhoq-horizontalscaling-tz7fw ns-fsore HorizontalScaling nebula-lfnhoq graphd Succeed 1/1 Feb 11,2026 17:40 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-horizontalscaling-tz7fw --namespace ns-fsore`
opsrequest.operations.kubeblocks.io/nebula-lfnhoq-horizontalscaling-tz7fw patched
`kbcli cluster delete-ops --name nebula-lfnhoq-horizontalscaling-tz7fw --force --auto-approve --namespace ns-fsore`
OpsRequest nebula-lfnhoq-horizontalscaling-tz7fw deleted
check component storaged exists
`kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true)`
cluster storaged scale-out
cluster storaged scale-out replicas: 4
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-out nebula-lfnhoq --auto-approve --force=true --components storaged --replicas 2 --namespace ns-fsore`
OpsRequest nebula-lfnhoq-horizontalscaling-877cc created successfully, you can view the progress:
	kbcli cluster describe-ops nebula-lfnhoq-horizontalscaling-877cc -n ns-fsore
check ops status
`kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore`
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-lfnhoq-horizontalscaling-877cc ns-fsore HorizontalScaling nebula-lfnhoq storaged Running -/- Feb 11,2026 17:40 UTC+0800
check cluster status
`kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore`
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore`
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800
nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800
nebula-lfnhoq-storaged-3 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:40 UTC+0800
nebula-lfnhoq-storaged-4 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:40 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!'
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-horizontalscaling-877cc ns-fsore HorizontalScaling nebula-lfnhoq storaged Succeed 2/2 Feb 11,2026 17:40 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-horizontalscaling-877cc ns-fsore HorizontalScaling nebula-lfnhoq storaged Succeed 2/2 Feb 11,2026 17:40 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-horizontalscaling-877cc --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-horizontalscaling-877cc patched  `kbcli cluster delete-ops --name nebula-lfnhoq-horizontalscaling-877cc --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-horizontalscaling-877cc deleted check component storaged exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true )`(B  cluster storaged scale-in cluster storaged scale-in replicas: 2 check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster scale-in nebula-lfnhoq --auto-approve --force=true --components storaged --replicas 2 --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-horizontalscaling-cq9xq created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-horizontalscaling-cq9xq -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-horizontalscaling-cq9xq ns-fsore HorizontalScaling nebula-lfnhoq storaged Creating -/- Feb 11,2026 17:41 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels 
--namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Running Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:30 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:30 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:30 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:31 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:31 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:31 UTC+0800 logs:2Gi check pod status done(B  
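The secret lookups that recur through this log (`kubectl get secrets ... -o jsonpath="{.data.username}"` etc.) return base64-encoded values; Kubernetes stores Secret `.data` fields encoded, so the raw jsonpath output has to be decoded before it can be used as `DB_USERNAME`/`DB_PASSWORD`. A minimal sketch of that decode step, using the credential values shown in this log as stand-ins rather than a live `kubectl` call:

```shell
#!/bin/sh
# Secret .data fields are base64-encoded. In the real run the encoded
# values come from, e.g.:
#   kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"
# Here we encode stand-in values locally so the sketch is self-contained.
username_b64=$(printf '%s' 'root' | base64)
password_b64=$(printf '%s' '015&uQ3fl&40k28!' | base64)

# Decode before use (GNU coreutils base64; macOS may need -D instead of -d).
DB_USERNAME=$(printf '%s' "$username_b64" | base64 -d)
DB_PASSWORD=$(printf '%s' "$password_b64" | base64 -d)

# prints: DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD}"
```

Skipping the decode is a common source of "auth failed" noise in harnesses like this one, since the encoded string is silently passed as the password.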
`kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-horizontalscaling-cq9xq ns-fsore HorizontalScaling nebula-lfnhoq storaged Succeed 2/2 Feb 11,2026 17:41 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-horizontalscaling-cq9xq ns-fsore HorizontalScaling nebula-lfnhoq storaged Succeed 2/2 Feb 11,2026 17:41 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-horizontalscaling-cq9xq --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-horizontalscaling-cq9xq patched  `kbcli cluster delete-ops --name nebula-lfnhoq-horizontalscaling-cq9xq --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-horizontalscaling-cq9xq deleted cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart nebula-lfnhoq --auto-approve --force=true --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-rntfq created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-restart-rntfq -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all 
--namespace ns-fsore `  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-rntfq ns-fsore Restart nebula-lfnhoq graphd,metad,storaged Creating -/- Feb 11,2026 17:42 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (polled repeatedly until the restart finished) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:43 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:43 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:42 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr 
nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-rntfq ns-fsore Restart nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:42 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-restart-rntfq ns-fsore Restart nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:42 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-restart-rntfq --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-restart-rntfq patched  `kbcli cluster delete-ops --name nebula-lfnhoq-restart-rntfq --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-rntfq deleted test failover connectionstress(B check cluster status before cluster-failover-connectionstress check cluster status done(B cluster_status:Running(B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-lfnhoq --namespace ns-fsore `(B   `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B apiVersion: v1 kind: Pod metadata: name: test-db-client-connectionstress-nebula-lfnhoq namespace: ns-fsore spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - 
"nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local" - "--user" - "root" - "--password" - "015&uQ3fl&40k28!" - "--port" - "9669" - "--database" - "default" - "--dbtype" - "nebula" - "--test" - "connectionstress" - "--connections" - "300" - "--duration" - "60" restartPolicy: Never  `kubectl apply -f test-db-client-connectionstress-nebula-lfnhoq.yaml`  pod/test-db-client-connectionstress-nebula-lfnhoq created apply test-db-client-connectionstress-nebula-lfnhoq.yaml Success  `rm -rf test-db-client-connectionstress-nebula-lfnhoq.yaml`  check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-lfnhoq 1/1 Running 0 6s (status polled every ~5s; the pod stayed 1/1 Running through 86s) check pod test-db-client-connectionstress-nebula-lfnhoq status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-nebula-lfnhoq 0/1 Completed 0 91s check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Running Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:43 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi
aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:43 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:42 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73) ... 4 more Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.4.34 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224) at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485) at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126) ... 
6 more 09:47:24.292 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.4.34 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 09:47:24.292 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Failed to connect to Nebula space: create session failed. Trying with space Nebula. (the same AuthFailedException / "create session failed" stack trace through SyncConnection.authenticate, SessionPool.init and NebulaTester.connect recurs several more times between 09:47:24 and 09:47:34) CREATE SPACE default Successfully 09:47:34.365 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.4.34 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf 09:47:34.366 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed. Releasing connections...
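The repeated AuthFailedException above is graphd enforcing its per-IP, per-user session cap: the stress client opens 300 concurrent sessions from one pod IP (10.244.4.34) as user root, which is exactly the default threshold of 300 reported in the error text. If a run needs more headroom, the error message itself names the knob; a hedged sketch of the relevant nebula-graphd.conf line (the value is illustrative, and how to apply it depends on how the cluster's config is managed, e.g. through a KubeBlocks configure OpsRequest):

```
# nebula-graphd.conf -- raise the per-IP per-user session cap
# (1000 is an illustrative value, not a recommendation)
--max_sessions_per_ip_per_user=1000
```

Note that hitting the cap is the stress condition this check exercises; the test is still reported as "failover connectionstress Success" below because the cluster stays Running throughout.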
Test Result: null Connection Information: Database Type: nebula Host: nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local Port: 9669 Database: default Table: User: root Org: Access Mode: mysql Test Type: connectionstress Connection Count: 300 Duration: 60 seconds  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-lfnhoq --namespace ns-fsore `(B  pod/test-db-client-connectionstress-nebula-lfnhoq patched (no change) pod "test-db-client-connectionstress-nebula-lfnhoq" force deleted check failover pod name failover pod name:nebula-lfnhoq-graphd-0 failover connectionstress Success(B check component metad exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=metad --namespace ns-fsore | (grep "metad" || true )`(B  cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart nebula-lfnhoq --auto-approve --force=true --components metad --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-lllqw created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-restart-lllqw -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-lllqw ns-fsore Restart nebula-lfnhoq metad Creating -/- Feb 11,2026 17:47 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B 
cluster_status:Updating (polled repeatedly until the metad restart finished) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:48 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:48 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:47 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:43 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:42 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi
aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:42 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-lllqw ns-fsore Restart nebula-lfnhoq metad Succeed 3/3 Feb 11,2026 17:47 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-restart-lllqw ns-fsore Restart nebula-lfnhoq metad Succeed 3/3 Feb 11,2026 17:47 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-restart-lllqw --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-restart-lllqw patched  `kbcli cluster delete-ops --name nebula-lfnhoq-restart-lllqw --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-lllqw deleted cluster stop check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster stop nebula-lfnhoq --auto-approve --force=true --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-stop-tdkxn created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-stop-tdkxn -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all 
--namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-stop-tdkxn ns-fsore Stop nebula-lfnhoq Creating -/- Feb 11,2026 17:49 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Stopping Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B check cluster status done(B cluster_status:Stopped(B check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-stop-tdkxn ns-fsore Stop nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:49 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-stop-tdkxn ns-fsore Stop nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:49 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-stop-tdkxn --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-stop-tdkxn patched  `kbcli cluster delete-ops --name nebula-lfnhoq-stop-tdkxn --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-stop-tdkxn deleted cluster start check cluster status before ops check cluster status done(B cluster_status:Stopped(B  `kbcli cluster start nebula-lfnhoq --force=true --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-start-hrhpn created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-start-hrhpn -n 
ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-start-hrhpn ns-fsore Start nebula-lfnhoq Creating -/- Feb 11,2026 17:50 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:50 
UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-start-hrhpn ns-fsore Start nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:50 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-start-hrhpn ns-fsore Start nebula-lfnhoq graphd,metad,storaged Succeed 8/8 Feb 11,2026 17:50 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-start-hrhpn --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-start-hrhpn patched  `kbcli cluster delete-ops --name nebula-lfnhoq-start-hrhpn --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-start-hrhpn deleted check component metad exists  `kubectl get components -l 
app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=metad --namespace ns-fsore | (grep "metad" || true )`  check cluster status before ops check cluster status done cluster_status:Running  `kbcli cluster vscale nebula-lfnhoq --auto-approve --force=true --components metad --cpu 200m --memory 0.6Gi --namespace ns-fsore `  OpsRequest nebula-lfnhoq-verticalscaling-g26xb created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-verticalscaling-g26xb -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-g26xb ns-fsore VerticalScaling nebula-lfnhoq metad Creating -/- Feb 11,2026 17:51 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800
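An aside on the resource figures in the pod tables: the vscale request is `--memory 0.6Gi`, and the instance list then reports `644245094400m / 644245094400m`. That is the same quantity, since Kubernetes normalizes fractional binary quantities into milli-byte notation (suffix `m` = 1/1000 of a byte, 1Gi = 1073741824 bytes). A quick sanity check, as a sketch:

```shell
# 0.6Gi in Kubernetes milli-byte notation:
# 0.6 * 1073741824 bytes = 644245094.4 bytes, i.e. 644245094400m
bytes_per_gi=1073741824
milli=$(( 6 * bytes_per_gi * 1000 / 10 ))  # integer math for 0.6Gi * 1000m/byte
echo "${milli}m"
```

So the odd-looking `644245094400m` values below are just the requested 0.6Gi echoed back in canonical form, not a scaling error.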
nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-g26xb ns-fsore VerticalScaling nebula-lfnhoq metad Succeed 3/3 Feb 11,2026 17:51 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-verticalscaling-g26xb ns-fsore VerticalScaling nebula-lfnhoq metad Succeed 3/3 Feb 11,2026 17:51 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-verticalscaling-g26xb --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-verticalscaling-g26xb patched  `kbcli cluster delete-ops --name nebula-lfnhoq-verticalscaling-g26xb --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-verticalscaling-g26xb deleted check component metad exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=metad --namespace ns-fsore | (grep "metad" || true )`(B  check component storaged exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true )`(B   `kubectl get pvc -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=metad,storaged,apps.kubeblocks.io/vct-name=data --namespace ns-fsore `(B  nebula-lfnhoq metad,storaged data pvc is empty cluster volume-expand check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster volume-expand nebula-lfnhoq --auto-approve --force=true --components metad,storaged --volume-claim-templates data --storage 6Gi --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-volumeexpansion-vsqnp created successfully, you can view the progress: kbcli cluster describe-ops 
nebula-lfnhoq-volumeexpansion-vsqnp -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-volumeexpansion-vsqnp ns-fsore VolumeExpansion nebula-lfnhoq metad,storaged Creating -/- Feb 11,2026 17:52 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52
UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-volumeexpansion-vsqnp ns-fsore VolumeExpansion nebula-lfnhoq metad,storaged Succeed 6/6 Feb 11,2026 17:52 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-volumeexpansion-vsqnp ns-fsore VolumeExpansion nebula-lfnhoq metad,storaged Succeed 6/6 Feb 11,2026 17:52 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-volumeexpansion-vsqnp --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-volumeexpansion-vsqnp patched  `kbcli cluster delete-ops --name nebula-lfnhoq-volumeexpansion-vsqnp --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-volumeexpansion-vsqnp deleted check component storaged exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true )`(B  cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart nebula-lfnhoq --auto-approve --force=true --components storaged --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-g6sl4 created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-restart-g6sl4 -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-g6sl4 ns-fsore Restart nebula-lfnhoq storaged Pending -/- Feb 11,2026 17:58 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS 
nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800
logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:00 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:59 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 100m / 100m 512Mi / 512Mi data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:58 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-g6sl4 ns-fsore Restart nebula-lfnhoq storaged Succeed 3/3 Feb 11,2026 17:58 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-restart-g6sl4 ns-fsore Restart nebula-lfnhoq storaged Succeed 3/3 Feb 11,2026 17:58 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-restart-g6sl4 --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-restart-g6sl4 patched  `kbcli cluster delete-ops --name nebula-lfnhoq-restart-g6sl4 --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-g6sl4 deleted check component storaged exists  `kubectl get components -l app.kubernetes.io/instance=nebula-lfnhoq,apps.kubeblocks.io/component-name=storaged --namespace ns-fsore | (grep "storaged" || true )`(B  check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster vscale nebula-lfnhoq --auto-approve --force=true --components storaged --cpu 200m --memory 0.6Gi --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-verticalscaling-qfx54 created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-verticalscaling-qfx54 -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-qfx54 ns-fsore VerticalScaling nebula-lfnhoq storaged Creating -/- Feb 11,2026 18:00 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut 
Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:50 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:02
UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:01 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:00 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-qfx54 ns-fsore VerticalScaling nebula-lfnhoq storaged Succeed 3/3 Feb 11,2026 18:00 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-verticalscaling-qfx54 ns-fsore VerticalScaling nebula-lfnhoq storaged Succeed 3/3 Feb 11,2026 18:00 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-verticalscaling-qfx54 --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-verticalscaling-qfx54 patched  `kbcli cluster delete-ops --name nebula-lfnhoq-verticalscaling-qfx54 --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-verticalscaling-qfx54 deleted cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart nebula-lfnhoq --auto-approve --force=true --components graphd --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-9zvsm created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-restart-9zvsm -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-9zvsm ns-fsore Restart nebula-lfnhoq graphd Running -/- Feb 11,2026 18:02 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating(B cluster_status:Updating(B 
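The runs of `cluster_status:Updating` lines in this log are a poll loop: the script re-reads the STATUS column until the cluster leaves Updating. A minimal sketch with the `kbcli` query stubbed out (a real loop would parse something like `kbcli cluster list nebula-lfnhoq --namespace ns-fsore` instead of the stub, and sleep between attempts):

```shell
# Stub standing in for the kbcli status query; returns Updating twice, then Running.
get_status() {
  if [ "${POLLS:-0}" -ge 2 ]; then echo "Running"; else echo "Updating"; fi
}

POLLS=0
while :; do
  status=$(get_status)
  echo "cluster_status:$status"
  [ "$status" = "Running" ] && break
  POLLS=$((POLLS + 1))   # a real loop would sleep here between polls
done
```

That is why the same status line appears over and over between "check cluster status" and "check cluster status done".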
cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:03 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 100m / 100m 512Mi / 512Mi logs:2Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:02 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:02 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged
Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:01 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:00 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' --port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-restart-9zvsm ns-fsore Restart nebula-lfnhoq graphd Succeed 2/2 Feb 11,2026 18:02 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-restart-9zvsm ns-fsore Restart nebula-lfnhoq graphd Succeed 2/2 Feb 11,2026 18:02 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-restart-9zvsm --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-restart-9zvsm patched  `kbcli cluster delete-ops --name nebula-lfnhoq-restart-9zvsm --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-restart-9zvsm deleted check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster vscale nebula-lfnhoq --auto-approve 
--force=true --components graphd --cpu 200m --memory 0.6Gi --namespace ns-fsore `  OpsRequest nebula-lfnhoq-verticalscaling-x7jsm created successfully, you can view the progress: kbcli cluster describe-ops nebula-lfnhoq-verticalscaling-x7jsm -n ns-fsore check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-x7jsm ns-fsore VerticalScaling nebula-lfnhoq graphd Creating -/- Feb 11,2026 18:04 UTC+0800 check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Updating Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating (repeated while polling) check cluster status done cluster_status:Running check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:05 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:04 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore
nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:02 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:01 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:00 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops nebula-lfnhoq --status all --namespace ns-fsore `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-lfnhoq-verticalscaling-x7jsm ns-fsore VerticalScaling nebula-lfnhoq graphd Succeed 2/2 Feb 11,2026 18:04 UTC+0800 check ops status done(B ops_status:nebula-lfnhoq-verticalscaling-x7jsm ns-fsore VerticalScaling nebula-lfnhoq graphd Succeed 2/2 Feb 11,2026 18:04 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-lfnhoq-verticalscaling-x7jsm --namespace ns-fsore `(B  opsrequest.operations.kubeblocks.io/nebula-lfnhoq-verticalscaling-x7jsm patched  `kbcli cluster delete-ops --name nebula-lfnhoq-verticalscaling-x7jsm --force --auto-approve --namespace ns-fsore `(B  OpsRequest nebula-lfnhoq-verticalscaling-x7jsm deleted cluster update terminationPolicy WipeOut  `kbcli cluster update nebula-lfnhoq --termination-policy=WipeOut --namespace ns-fsore `(B  cluster.apps.kubeblocks.io/nebula-lfnhoq updated (no change) check cluster status  `kbcli cluster list nebula-lfnhoq --show-labels --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-lfnhoq ns-fsore nebula WipeOut Running Feb 11,2026 17:29 UTC+0800 app.kubernetes.io/instance=nebula-lfnhoq,clusterdefinition.kubeblocks.io/name=nebula check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances nebula-lfnhoq --namespace ns-fsore `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-lfnhoq-graphd-0 ns-fsore nebula-lfnhoq graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:2Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:05 UTC+0800 nebula-lfnhoq-graphd-1 ns-fsore nebula-lfnhoq 
graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:2Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:04 UTC+0800 nebula-lfnhoq-metad-0 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-1 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 17:52 UTC+0800 logs:2Gi nebula-lfnhoq-metad-2 ns-fsore nebula-lfnhoq metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 17:51 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-0 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000003/10.224.0.5 Feb 11,2026 18:02 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-1 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000002/10.224.0.9 Feb 11,2026 18:01 UTC+0800 logs:2Gi nebula-lfnhoq-storaged-2 ns-fsore nebula-lfnhoq storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-55976491-vmss000001/10.224.0.7 Feb 11,2026 18:00 UTC+0800 logs:2Gi check pod status done(B  `kubectl get secrets -l app.kubernetes.io/instance=nebula-lfnhoq`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets nebula-lfnhoq-graphd-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:015&uQ3fl&40k28!;DB_PORT:9669;DB_DATABASE:default(B check cluster connect  `echo "/usr/local/nebula/console/nebula-console --addr nebula-lfnhoq-graphd.ns-fsore.svc.cluster.local --user root --password '015&uQ3fl&40k28!' 
--port 9669" | kubectl exec -it nebula-lfnhoq-storaged-0 --namespace ns-fsore -- bash`(B  check cluster connect done(B cluster list-logs  `kbcli cluster list-logs nebula-lfnhoq --namespace ns-fsore `(B  cluster logs  `kbcli cluster logs nebula-lfnhoq --tail 30 --namespace ns-fsore `(B  I20260211 10:06:11.376725 1 GraphSessionManager.cpp:337] Total of 17 sessions are loaded I20260211 10:06:11.378237 1 Snowflake.cpp:17] WorkerId init success: 2 I20260211 10:06:11.380280 64 GraphServer.cpp:63] Starting nebula-graphd on nebula-lfnhoq-graphd-0.nebula-lfnhoq-graphd-headless.ns-fsore.svc.cluster.local:9669 ==> /usr/local/nebula/logs/nebula-graphd.WARNING <== E20260211 09:50:51.131716 1 MetaClient.cpp:112] Heartbeat failed, status:RPC failure in MetaClient: apache::thrift::transport::TTransportException: Connection not open: apache::thrift::transport::TTransportException: AsyncSocketException: setReadCallback() called with socket in invalid state, type = Socket not open E20260211 09:52:05.374269 55 MetaClient.cpp:772] Send request to "nebula-lfnhoq-metad-1.nebula-lfnhoq-metad-headless.ns-fsore.svc.cluster.local":9559, exceed retry limit E20260211 09:52:05.374305 55 MetaClient.cpp:773] RpcResponse exception: apache::thrift::transport::TTransportException: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 113 (No route to host) E20260211 09:52:05.374408 63 GraphSessionManager.cpp:290] Update sessions failed: RPC failure in MetaClient: apache::thrift::transport::TTransportException: Failed to write to remote endpoint. Wrote 0 bytes. AsyncSocketException: AsyncSocketException: connect failed, type = Socket not open, errno = 113 (No route to host) E20260211 09:52:05.447228 62 MetaClient.cpp:192] Heartbeat failed, status:LeaderChanged: Leader changed! 
E20260211 09:59:09.380946 34 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-lfnhoq-storaged-2.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local":9779
E20260211 09:59:43.984081 35 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-lfnhoq-storaged-1.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local":9779
E20260211 10:00:24.446367 32 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-lfnhoq-storaged-0.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local":9779
E20260211 10:01:04.588438 35 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-lfnhoq-storaged-2.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local":9779
E20260211 10:02:15.650298 32 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-lfnhoq-storaged-0.nebula-lfnhoq-storaged-headless.ns-fsore.svc.cluster.local":9779

==> /usr/local/nebula/logs/nebula-graphd.ERROR <==
(identical E-level entries as the WARNING section above; glog writes errors to both files)

==> /usr/local/nebula/logs/nebula-graphd.INFO <==
I20260211 10:06:39.715818 36 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.3.212]:51208

delete cluster nebula-lfnhoq
`kbcli cluster delete nebula-lfnhoq --auto-approve --namespace ns-fsore`
pod_info:
nebula-lfnhoq-graphd-0     5/5   Running   0   44s
nebula-lfnhoq-graphd-1     5/5   Running   0   104s
nebula-lfnhoq-metad-0      4/4   Running   0   14m
nebula-lfnhoq-metad-1      4/4   Running   0   14m
nebula-lfnhoq-metad-2      4/4   Running   0   15m
nebula-lfnhoq-storaged-0   5/5   Running   0   4m38s
nebula-lfnhoq-storaged-1   5/5   Running   0   5m10s
nebula-lfnhoq-storaged-2   5/5   Running   0   5m43s
Cluster nebula-lfnhoq deleted
delete cluster pod done
check cluster resource non-exist OK: pvc
delete cluster done
Nebula Test Suite All Done!
Test Engine: nebula
Test Type: 12
--------------------------------------Nebula (Topology = default Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=default;ComponentDefinition=nebula-graphd-1.0.1;ComponentVersion=nebula;ServiceVersion=v3.5.0;]|[Description=Create a cluster with the specified topology default with the specified component definition nebula-graphd-1.0.1 and component version nebula and service version v3.5.0]
[PASSED]|[Connect]|[ComponentName=graphd]|[Description=Connect to the cluster]
[PASSED]|[VolumeExpansion]|[ComponentName=graphd,metad,storaged;ComponentVolume=logs]|[Description=VolumeExpansion the cluster specify component graphd,metad,storaged and volume logs]
[PASSED]|[HorizontalScaling Out]|[ComponentName=graphd]|[Description=HorizontalScaling Out the cluster specify component graphd]
[PASSED]|[HorizontalScaling In]|[ComponentName=graphd]|[Description=HorizontalScaling In the cluster specify component graphd]
[PASSED]|[HorizontalScaling Out]|[ComponentName=storaged]|[Description=HorizontalScaling Out the cluster specify component storaged]
[PASSED]|[HorizontalScaling In]|[ComponentName=storaged]|[Description=HorizontalScaling In the cluster specify component storaged]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[NoFailover]|[HA=Connection Stress;ComponentName=graphd]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.]
[PASSED]|[Restart]|[ComponentName=metad]|[Description=Restart the cluster specify component metad]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=metad]|[Description=VerticalScaling the cluster specify component metad]
[PASSED]|[VolumeExpansion]|[ComponentName=metad;ComponentVolume=data]|[Description=VolumeExpansion the cluster specify component metad and volume data]
[PASSED]|[Restart]|[ComponentName=storaged]|[Description=Restart the cluster specify component storaged]
[PASSED]|[VerticalScaling]|[ComponentName=storaged]|[Description=VerticalScaling the cluster specify component storaged]
[PASSED]|[Restart]|[ComponentName=graphd]|[Description=Restart the cluster specify component graphd]
[PASSED]|[VerticalScaling]|[ComponentName=graphd]|[Description=VerticalScaling the cluster specify component graphd]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
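Note on the secret reads in the log above: `kubectl ... -o jsonpath="{.data.username}"` and friends return base64-encoded bytes, while the `DB_USERNAME:root;...` line shows the values after decoding. A minimal sketch of that decoding step, using literal values instead of a live cluster (`decode` is a hypothetical helper; the secret name `nebula-lfnhoq-graphd-account-root` is taken from the log):

```shell
# Kubernetes Secret .data fields are base64-encoded, so each jsonpath
# read must be piped through `base64 -d` before use, e.g.:
#   kubectl get secret nebula-lfnhoq-graphd-account-root \
#     -o jsonpath='{.data.username}' | base64 -d
# Hypothetical helper demonstrating the decode on literal values:
decode() { printf '%s' "$1" | base64 -d; }

decode "cm9vdA=="   # encoded form of the username "root"
echo
decode "OTY2OQ=="   # encoded form of the port "9669"
echo
```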