source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-dacvd `
`kubectl create namespace ns-dacvd`
namespace/ns-dacvd created
create namespace ns-dacvd done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.1`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.

Kubernetes: v1.32.6
KubeBlocks: 1.0.1
kbcli: 1.0.1

Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:

	https://kubeblocks.io

download kbcli v1.0.1 done
Kubernetes Env: v1.32.6
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is: 1.0.1
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.1
Error: no repositories to show
`helm repo add chaos-mesh https://charts.chaos-mesh.org`
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name: graphd
set component version
set component version: nebula
set service versions: v3.8.0,v3.5.0
set service versions sorted: v3.5.0,v3.8.0
set nebula component definition
set nebula component definition nebula-storaged-1.0.1
REPORT_COUNT 0:0
set replicas first: 2,v3.5.0|2,v3.8.0
set replicas third: 2,v3.5.0
set replicas fourth: 2,v3.5.0
set minimum cmpv service version
set minimum cmpv service version replicas: 2,v3.5.0
REPORT_COUNT: 1
CLUSTER_TOPOLOGY: default
topology default found in cluster definition nebula
set nebula component definition
set nebula component definition nebula-metad-1.0.1
LIMIT_CPU: 0.1
LIMIT_MEMORY: 0.5
storage size: 1
CLUSTER_NAME: nebula-tdmfim
No resources found in ns-dacvd namespace.
pod_info:
termination_policy: Delete
create 2 replica Delete nebula cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: nebula-graphd-1.0.1 by component version: nebula

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: nebula-tdmfim
  namespace: ns-dacvd
spec:
  clusterDef: nebula
  topology: default
  terminationPolicy: Delete
  componentSpecs:
    - name: graphd
      serviceVersion: v3.5.0
      replicas: 2
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: metad
      serviceVersion: v3.5.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: storaged
      serviceVersion: v3.5.0
      replicas: 3
      env:
        - name: DEFAULT_TIMEZONE
          value: UTC+00:00:00
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
        - name: logs
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi

`kubectl apply -f test_create_nebula-tdmfim.yaml`
cluster.apps.kubeblocks.io/nebula-tdmfim created
apply test_create_nebula-tdmfim.yaml Success
`rm -rf test_create_nebula-tdmfim.yaml`
check cluster status
`kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd `
NAME            NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
nebula-tdmfim   ns-dacvd    nebula               Delete               Creating   Sep 11,2025 17:21 UTC+0800   clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Creating (repeated 7 times)
cluster_status:Updating (repeated 2 times)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd `
NAME                       NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE             NODE                                             CREATED-TIME
nebula-tdmfim-graphd-0     ns-dacvd    nebula-tdmfim   graphd      Running          0                 100m / 100m          512Mi / 512Mi           logs:1Gi            aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-graphd-1     ns-dacvd    nebula-tdmfim   graphd      Running          0                 100m / 100m          512Mi / 512Mi           logs:1Gi            aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-metad-0      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-metad-1      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-metad-2      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-storaged-0   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-storaged-1   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-storaged-2   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:23 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check pod nebula-tdmfim-graphd-0 container_name graphd exist password K36od9t63Y9!#17#
check pod nebula-tdmfim-graphd-0 container_name agent exist password K36od9t63Y9!#17#
check pod nebula-tdmfim-graphd-0 container_name exporter exist password K36od9t63Y9!#17#
check pod nebula-tdmfim-graphd-0 container_name kbagent exist password K36od9t63Y9!#17#
check pod nebula-tdmfim-graphd-0 container_name config-manager exist password K36od9t63Y9!#17#
No container logs contain secret password.
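The jsonpath reads above return base64-encoded secret fields, which are decoded before being printed as `DB_USERNAME`/`DB_PASSWORD`. A minimal sketch of that decode step, with sample values standing in for the live `kubectl get secrets ... -o jsonpath="{.data.username}"` output (which needs a cluster):

```shell
# Sketch only: mirrors the decode of base64-encoded Secret fields above.
# The encoded inputs are produced locally here instead of via kubectl.
username_b64=$(printf 'root' | base64)
password_b64=$(printf 'K36od9t63Y9!#17#' | base64)

DB_USERNAME=$(printf '%s' "$username_b64" | base64 -d)
DB_PASSWORD=$(printf '%s' "$password_b64" | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PASSWORD:${DB_PASSWORD}"
```

In a real run the `base64 -d` stage is the only extra step on top of the jsonpath commands shown in the log.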
describe cluster
`kbcli cluster describe nebula-tdmfim --namespace ns-dacvd `
Name: nebula-tdmfim	 Created Time: Sep 11,2025 17:21 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
ns-dacvd    nebula               default    Running   Delete

Endpoints:
COMPONENT   INTERNAL                                                 EXTERNAL
graphd      nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local:9669
            nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local:19669
            nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local:19670

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE                   ROLE   STATUS    AZ   NODE                                             CREATED-TIME
graphd      v3.5.0            nebula-tdmfim-graphd-0            Running   0    aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
graphd      v3.5.0            nebula-tdmfim-graphd-1            Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
metad       v3.5.0            nebula-tdmfim-metad-0             Running   0    aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
metad       v3.5.0            nebula-tdmfim-metad-1             Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
metad       v3.5.0            nebula-tdmfim-metad-2             Running   0    aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
storaged    v3.5.0            nebula-tdmfim-storaged-0          Running   0    aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
storaged    v3.5.0            nebula-tdmfim-storaged-1          Running   0    aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
storaged    v3.5.0            nebula-tdmfim-storaged-2          Running   0    aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:23 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
graphd                          100m / 100m          512Mi / 512Mi           logs:1Gi       default
metad                           100m / 100m          512Mi / 512Mi           data:1Gi       default
                                                                             logs:1Gi       default
storaged                        100m / 100m          512Mi / 512Mi           data:1Gi       default
                                                                             logs:1Gi       default

Images:
COMPONENT   COMPONENT-DEFINITION    IMAGE
graphd      nebula-graphd-1.0.1     docker.io/apecloud/nebula-graphd:v3.5.0
                                    docker.io/apecloud/nebula-agent:3.7.1
                                    docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                    docker.io/apecloud/nebula-console:v3.8.0
                                    docker.io/apecloud/kubeblocks-tools:0.9.4
metad       nebula-metad-1.0.1      docker.io/apecloud/nebula-metad:v3.5.0
                                    docker.io/apecloud/nebula-agent:3.7.1
                                    docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                    docker.io/apecloud/kubeblocks-tools:0.9.4
storaged    nebula-storaged-1.0.1   docker.io/apecloud/nebula-storaged:v3.5.0
                                    docker.io/apecloud/nebula-agent:3.7.1
                                    docker.io/apecloud/nebula-stats-exporter:v3.8.0
                                    docker.io/apecloud/nebula-tool:1.0.0
                                    docker.io/apecloud/kubeblocks-tools:0.9.4

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-dacvd nebula-tdmfim

`kbcli cluster label nebula-tdmfim app.kubernetes.io/instance- --namespace ns-dacvd `
label "app.kubernetes.io/instance" not found.
`kbcli cluster label nebula-tdmfim app.kubernetes.io/instance=nebula-tdmfim --namespace ns-dacvd `
`kbcli cluster label nebula-tdmfim --list --namespace ns-dacvd `
NAME            NAMESPACE   LABELS
nebula-tdmfim   ns-dacvd    app.kubernetes.io/instance=nebula-tdmfim clusterdefinition.kubeblocks.io/name=nebula
label cluster app.kubernetes.io/instance=nebula-tdmfim Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=nebula-tdmfim --namespace ns-dacvd `
`kbcli cluster label nebula-tdmfim --list --namespace ns-dacvd `
NAME            NAMESPACE   LABELS
nebula-tdmfim   ns-dacvd    app.kubernetes.io/instance=nebula-tdmfim case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test1 Success
`kbcli cluster label nebula-tdmfim case.name=kbcli.test2 --overwrite --namespace ns-dacvd `
`kbcli cluster label nebula-tdmfim --list --namespace ns-dacvd `
NAME            NAMESPACE   LABELS
nebula-tdmfim   ns-dacvd    app.kubernetes.io/instance=nebula-tdmfim case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=nebula
label cluster case.name=kbcli.test2 Success
`kbcli cluster label nebula-tdmfim case.name- --namespace ns-dacvd `
`kbcli cluster label nebula-tdmfim --list --namespace ns-dacvd `
NAME            NAMESPACE   LABELS
nebula-tdmfim   ns-dacvd    app.kubernetes.io/instance=nebula-tdmfim clusterdefinition.kubeblocks.io/name=nebula
delete cluster label case.name Success

cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
`echo "echo \"SHOW HOSTS;\" | /usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash `
Defaulted container "storaged" out of: storaged, agent, exporter, kbagent, config-manager, init-console (init), init-agent (init), init-kbagent (init), kbagent-worker (init), install-config-manager-tool (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Welcome!
(root@nebula) [(none)]>
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
| Host                                                                                  | Port | Status   | Leader count | Leader distribution  | Partition distribution | Version |
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
| "nebula-tdmfim-storaged-0.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.5.0" |
| "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.5.0" |
| "nebula-tdmfim-storaged-2.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local" | 9779 | "ONLINE" | 0            | "No valid partition" | "No valid partition"   | "3.5.0" |
+---------------------------------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+---------+
Got 3 rows (time spent 671µs/1.084815ms)

Thu, 11 Sep 2025 09:25:20 UTC

(root@nebula) [(none)]>
Bye root!

Thu, 11 Sep 2025 09:25:20 UTC
connect cluster Success

test failover connectionstress
check cluster status before cluster-failover-connectionstress
check cluster status done
cluster_status:Running
Error from server (NotFound): pods "test-db-client-connectionstress-nebula-tdmfim" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-tdmfim --namespace ns-dacvd `
Error from server (NotFound): pods "test-db-client-connectionstress-nebula-tdmfim" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
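The finalizer-clearing patch above (its braces were garbled to `***` in the raw capture) is a JSON merge patch that empties `metadata.finalizers` so a stuck pod can be deleted. A sketch of the payload, with the cluster-dependent `kubectl` call left as a comment:

```shell
# JSON merge-patch body that clears metadata.finalizers, as used in this run:
#   kubectl patch pods "$POD" -n "$NS" --type=merge -p "$PATCH"
# (the kubectl invocation itself needs a live cluster, so only the payload
#  is constructed and printed here)
PATCH='{"metadata":{"finalizers":[]}}'
echo "$PATCH"
```

Single quotes keep the braces and double quotes intact when the payload is passed to `-p`.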
Error from server (NotFound): pods "test-db-client-connectionstress-nebula-tdmfim" not found
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default

apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-nebula-tdmfim
  namespace: ns-dacvd
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local"
        - "--user"
        - "root"
        - "--password"
        - "K36od9t63Y9!#17#"
        - "--port"
        - "9669"
        - "--database"
        - "default"
        - "--dbtype"
        - "nebula"
        - "--test"
        - "connectionstress"
        - "--connections"
        - "300"
        - "--duration"
        - "60"
  restartPolicy: Never

`kubectl apply -f test-db-client-connectionstress-nebula-tdmfim.yaml`
pod/test-db-client-connectionstress-nebula-tdmfim created
apply test-db-client-connectionstress-nebula-tdmfim.yaml Success
`rm -rf test-db-client-connectionstress-nebula-tdmfim.yaml`
check pod status
pod_status:
NAME                                            READY   STATUS    RESTARTS   AGE
test-db-client-connectionstress-nebula-tdmfim   1/1     Running   0          5s
(the pod was polled every ~5s from 5s through 73s and stayed 1/1 Running throughout)
check pod test-db-client-connectionstress-nebula-tdmfim status done
pod_status:
NAME                                            READY   STATUS      RESTARTS   AGE
test-db-client-connectionstress-nebula-tdmfim   0/1     Completed   0          78s
check cluster status
`kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd `
NAME            NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
nebula-tdmfim   ns-dacvd    nebula               Delete               Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd `
NAME                       NAMESPACE   CLUSTER         COMPONENT   STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE             NODE                                             CREATED-TIME
nebula-tdmfim-graphd-0     ns-dacvd    nebula-tdmfim   graphd      Running          0                 100m / 100m          512Mi / 512Mi           logs:1Gi            aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-graphd-1     ns-dacvd    nebula-tdmfim   graphd      Running          0                 100m / 100m          512Mi / 512Mi           logs:1Gi            aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-metad-0      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-metad-1      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-metad-2      ns-dacvd    nebula-tdmfim   metad       Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:21 UTC+0800
nebula-tdmfim-storaged-0   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-storaged-1   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 17:23 UTC+0800
nebula-tdmfim-storaged-2   ns-dacvd    nebula-tdmfim   storaged    Running          0                 100m / 100m          512Mi / 512Mi           data:1Gi logs:1Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 17:23 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done

Connection-stress client output (the test exhausted the graphd per-IP session quota):

09:26:41.687 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.5.2 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
09:26:41.687 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed.
Failed to connect to Nebula space: create session failed. Trying with space Nebula.
com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.5.2 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
	at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224)
	at com.vesoft.nebula.client.graph.net.NebulaPool.getSession(NebulaPool.java:143)
	at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:64)
	at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126)
	at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
	at OneClient.executeTest(OneClient.java:108)
	at OneClient.main(OneClient.java:40)
(the same AuthFailedException and stack trace were logged again at 09:26:41.699/.700, 09:26:41.710, and 09:26:41.726)
09:26:41.708 [main] ERROR c.v.nebula.client.graph.SessionPool - Auth failed: Create Session failed: Too many sessions created from 10.244.5.2 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
09:26:41.708 [main] ERROR c.v.nebula.client.graph.SessionPool - SessionPool init failed.
java.io.IOException: Failed to connect to Nebula space: create session failed.
	at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:76)
	at com.apecloud.dbtester.tester.NebulaTester.connectionStress(NebulaTester.java:126)
	at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
	at OneClient.executeTest(OneClient.java:108)
	at OneClient.main(OneClient.java:40)
Caused by: java.lang.RuntimeException: create session failed.
	at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:130)
	at com.vesoft.nebula.client.graph.SessionPool.<init>(SessionPool.java:76)
	at com.apecloud.dbtester.tester.NebulaTester.connect(NebulaTester.java:73)
	... 4 more
Caused by: com.vesoft.nebula.client.graph.exception.AuthFailedException: Auth failed: Create Session failed: Too many sessions created from 10.244.5.2 by user root. the threshold is 300. You can change it by modifying 'max_sessions_per_ip_per_user' in nebula-graphd.conf
	at com.vesoft.nebula.client.graph.net.SyncConnection.authenticate(SyncConnection.java:224)
	at com.vesoft.nebula.client.graph.SessionPool.createSessionObject(SessionPool.java:485)
	at com.vesoft.nebula.client.graph.SessionPool.init(SessionPool.java:126)
	... 6 more
(the same IOException chain was logged again at 09:26:41.733/.734)

Releasing connections...
Test Result: null
Connection Information:
Database Type: nebula
Host: nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local
Port: 9669
Database: default
Table:
User: root
Org:
Access Mode: mysql
Test Type: connectionstress
Connection Count: 300
Duration: 60 seconds

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-nebula-tdmfim --namespace ns-dacvd `
pod/test-db-client-connectionstress-nebula-tdmfim patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
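The repeated `pod_status:`/`cluster_status:` lines throughout this run come from a simple poll-until-terminal loop. A runnable sketch of that pattern, with the cluster-dependent `kubectl` call replaced by a stub (the stub and its four-poll schedule are invented for illustration):

```shell
# Sketch of the status polling used throughout this log. A real run would set
# phase from:  kubectl get pods "$POD" -n "$NS" -o jsonpath='{.status.phase}'
# The stub below reports Running for three polls, then Succeeded (which
# kubectl renders as STATUS "Completed" for a pod with restartPolicy Never).
i=0
poll() {
  i=$((i + 1))
  if [ "$i" -lt 4 ]; then phase=Running; else phase=Succeeded; fi
}

poll
while [ "$phase" != "Succeeded" ]; do
  echo "pod_status:$phase"
  # a real loop would sleep ~5s between polls
  poll
done
echo "pod_status:$phase"
```

Note that `poll` sets `phase` directly rather than echoing it through command substitution, so the counter `i` is not lost in a subshell.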
pod "test-db-client-connectionstress-nebula-tdmfim" force deleted check failover pod name failover pod name:nebula-tdmfim-graphd-0 failover connectionstress Success check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=metad --namespace ns-dacvd | (grep "metad" || true )` check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale nebula-tdmfim --auto-approve --force=true --components metad --cpu 200m --memory 0.6Gi --namespace ns-dacvd ` OpsRequest nebula-tdmfim-verticalscaling-htplk created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-verticalscaling-htplk -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-verticalscaling-htplk ns-dacvd VerticalScaling nebula-tdmfim metad Running 0/3 Sep 11,2025 17:26 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 
100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:28 UTC+0800 logs:1Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:27 UTC+0800 logs:1Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:26 UTC+0800 logs:1Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:23 UTC+0800 logs:1Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:23 UTC+0800 logs:1Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 100m / 100m 512Mi / 512Mi data:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:23 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root 
--password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-verticalscaling-htplk ns-dacvd VerticalScaling nebula-tdmfim metad Succeed 3/3 Sep 11,2025 17:26 UTC+0800
check ops status done ops_status:nebula-tdmfim-verticalscaling-htplk ns-dacvd VerticalScaling nebula-tdmfim metad Succeed 3/3 Sep 11,2025 17:26 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-verticalscaling-htplk --namespace ns-dacvd `
opsrequest.operations.kubeblocks.io/nebula-tdmfim-verticalscaling-htplk patched
`kbcli cluster delete-ops --name nebula-tdmfim-verticalscaling-htplk --force --auto-approve --namespace ns-dacvd `
OpsRequest nebula-tdmfim-verticalscaling-htplk deleted
check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )`
check cluster status before ops check cluster status done cluster_status:Running
`kbcli cluster vscale nebula-tdmfim --auto-approve --force=true --components storaged --cpu 200m --memory 0.6Gi --namespace ns-dacvd `
OpsRequest nebula-tdmfim-verticalscaling-f9bpz created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-verticalscaling-f9bpz -n ns-dacvd
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-verticalscaling-f9bpz ns-dacvd VerticalScaling nebula-tdmfim storaged Running 0/3 Sep 11,2025 17:28 UTC+0800
check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS
CREATED-TIME LABELS
nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating (repeated while the vertical scaling was in progress)
check cluster status done cluster_status:Running
check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd `
NAME
NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:28 UTC+0800 logs:1Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:27 UTC+0800 logs:1Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:26 UTC+0800 logs:1Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:34 UTC+0800 logs:1Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:29 UTC+0800 logs:1Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:28 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o 
jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-verticalscaling-f9bpz ns-dacvd VerticalScaling nebula-tdmfim storaged Succeed 3/3 Sep 11,2025 17:28 UTC+0800 check ops status done ops_status:nebula-tdmfim-verticalscaling-f9bpz ns-dacvd VerticalScaling nebula-tdmfim storaged Succeed 3/3 Sep 11,2025 17:28 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations nebula-tdmfim-verticalscaling-f9bpz --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-verticalscaling-f9bpz patched `kbcli cluster delete-ops --name nebula-tdmfim-verticalscaling-f9bpz --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-verticalscaling-f9bpz deleted check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=metad --namespace ns-dacvd | (grep "metad" || true )` check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )` `kubectl get pvc -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=metad,storaged,apps.kubeblocks.io/vct-name=data --namespace ns-dacvd ` No resources found in ns-dacvd namespace. 
nebula-tdmfim metad,storaged data pvc is empty cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-tdmfim namespace. `kbcli cluster volume-expand nebula-tdmfim --auto-approve --force=true --components metad,storaged --volume-claim-templates data --storage 6Gi --namespace ns-dacvd ` OpsRequest nebula-tdmfim-volumeexpansion-49qgx created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-volumeexpansion-49qgx -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-volumeexpansion-49qgx ns-dacvd VolumeExpansion nebula-tdmfim metad,storaged Running 0/6 Sep 11,2025 17:34 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:23 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:28 UTC+0800 logs:1Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:27 UTC+0800 logs:1Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:26 UTC+0800 logs:1Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:34 UTC+0800 logs:1Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:29 UTC+0800 logs:1Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi 
aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:28 UTC+0800 logs:1Gi
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
No resources found in nebula-tdmfim namespace.
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-volumeexpansion-49qgx ns-dacvd VolumeExpansion nebula-tdmfim metad,storaged Succeed 6/6 Sep 11,2025 17:34 UTC+0800
check ops status done ops_status:nebula-tdmfim-volumeexpansion-49qgx ns-dacvd VolumeExpansion nebula-tdmfim metad,storaged Succeed 6/6 Sep 11,2025 17:34 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-volumeexpansion-49qgx --namespace ns-dacvd `
opsrequest.operations.kubeblocks.io/nebula-tdmfim-volumeexpansion-49qgx patched
`kbcli cluster delete-ops --name nebula-tdmfim-volumeexpansion-49qgx --force --auto-approve --namespace ns-dacvd `
OpsRequest nebula-tdmfim-volumeexpansion-49qgx deleted
cluster restart check cluster status before ops check cluster status done cluster_status:Running
`kbcli cluster restart nebula-tdmfim --auto-approve --force=true --namespace ns-dacvd `
OpsRequest nebula-tdmfim-restart-6nrp9 created successfully, you can view the progress: kbcli cluster describe-ops
nebula-tdmfim-restart-6nrp9 -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-6nrp9 ns-dacvd Restart nebula-tdmfim graphd,metad,storaged Running 0/8 Sep 11,2025 17:54 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:55 UTC+0800 logs:1Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim 
metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:55 UTC+0800 logs:1Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:54 UTC+0800 logs:1Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:1Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:1Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-6nrp9 ns-dacvd Restart nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 17:54 UTC+0800 check ops status done 
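The secret lookups above return base64-encoded fields that are then decoded into `DB_USERNAME`, `DB_PASSWORD`, and `DB_PORT`. A minimal decoding sketch; the encoded strings below are stand-ins, not the cluster's real secret data:

```shell
# Stand-in base64 values; a real run would take these from
# `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` etc.
b64_user="cm9vdA=="   # encodes "root"
b64_port="OTY2OQ=="   # encodes "9669"

# Decode each field; printf avoids appending a trailing newline first.
DB_USERNAME=$(printf '%s' "$b64_user" | base64 -d)
DB_PORT=$(printf '%s' "$b64_port" | base64 -d)
echo "DB_USERNAME:${DB_USERNAME};DB_PORT:${DB_PORT}"
```

The decoded values feed directly into the `nebula-console --user … --port …` connection check.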
ops_status:nebula-tdmfim-restart-6nrp9 ns-dacvd Restart nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 17:54 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-restart-6nrp9 --namespace ns-dacvd `
opsrequest.operations.kubeblocks.io/nebula-tdmfim-restart-6nrp9 patched
`kbcli cluster delete-ops --name nebula-tdmfim-restart-6nrp9 --force --auto-approve --namespace ns-dacvd `
OpsRequest nebula-tdmfim-restart-6nrp9 deleted
check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=metad --namespace ns-dacvd | (grep "metad" || true )`
cluster restart check cluster status before ops check cluster status done cluster_status:Running
`kbcli cluster restart nebula-tdmfim --auto-approve --force=true --components metad --namespace ns-dacvd `
OpsRequest nebula-tdmfim-restart-tpv9z created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-restart-tpv9z -n ns-dacvd
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-restart-tpv9z ns-dacvd Restart nebula-tdmfim metad Running 0/3 Sep 11,2025 17:56 UTC+0800
check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:1Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:1Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:1Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:1Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:1Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:1Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:1Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets 
nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-restart-tpv9z ns-dacvd Restart nebula-tdmfim metad Succeed 3/3 Sep 11,2025 17:56 UTC+0800
check ops status done ops_status:nebula-tdmfim-restart-tpv9z ns-dacvd Restart nebula-tdmfim metad Succeed 3/3 Sep 11,2025 17:56 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-restart-tpv9z --namespace ns-dacvd `
opsrequest.operations.kubeblocks.io/nebula-tdmfim-restart-tpv9z patched
`kbcli cluster delete-ops --name nebula-tdmfim-restart-tpv9z --force --auto-approve --namespace ns-dacvd `
OpsRequest nebula-tdmfim-restart-tpv9z deleted
check component graphd exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=graphd --namespace ns-dacvd | (grep "graphd" || true )`
check component metad exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=metad --namespace ns-dacvd | (grep "metad" || true )`
check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )`
`kubectl get pvc -l
app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=graphd,metad,storaged,apps.kubeblocks.io/vct-name=logs --namespace ns-dacvd ` No resources found in ns-dacvd namespace. nebula-tdmfim graphd,metad,storaged logs pvc is empty cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-tdmfim namespace. `kbcli cluster volume-expand nebula-tdmfim --auto-approve --force=true --components graphd,metad,storaged --volume-claim-templates logs --storage 10Gi --namespace ns-dacvd ` OpsRequest nebula-tdmfim-volumeexpansion-vdlgr created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-volumeexpansion-vdlgr -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-volumeexpansion-vdlgr ns-dacvd VolumeExpansion nebula-tdmfim graphd,metad,storaged Running 0/8 Sep 11,2025 17:58 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 
200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:10Gi
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
No resources found in nebula-tdmfim namespace.
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
nebula-tdmfim-volumeexpansion-vdlgr ns-dacvd VolumeExpansion nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 17:58 UTC+0800
check ops status done ops_status:nebula-tdmfim-volumeexpansion-vdlgr ns-dacvd VolumeExpansion nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 17:58 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-volumeexpansion-vdlgr --namespace ns-dacvd `
opsrequest.operations.kubeblocks.io/nebula-tdmfim-volumeexpansion-vdlgr patched
`kbcli cluster delete-ops --name nebula-tdmfim-volumeexpansion-vdlgr --force --auto-approve --namespace ns-dacvd `
OpsRequest nebula-tdmfim-volumeexpansion-vdlgr deleted
cluster graphd scale-out cluster graphd scale-out replicas: 3
check cluster status before ops check cluster status done cluster_status:Running
No resources found in nebula-tdmfim namespace.
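The long runs of `cluster_status:Updating` above come from polling the cluster status until it returns to Running. A minimal sketch of that loop; `get_status` is a stub standing in for the real `kbcli cluster list` query:

```shell
# Stub: pretend the first three polls observe Updating, then Running.
# A real loop would also sleep between polls and enforce a timeout.
n=0
get_status() { [ "$n" -lt 3 ] && echo "Updating" || echo "Running"; }

status=""
while [ "$status" != "Running" ]; do
  status=$(get_status)
  echo "cluster_status:${status}"
  n=$((n + 1))
done
echo "check cluster status done"
```

Each `cluster_status:Updating` line in the log corresponds to one iteration of such a loop.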
`kbcli cluster scale-out nebula-tdmfim --auto-approve --force=true --components graphd --replicas 1 --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-ptqsm created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-horizontalscaling-ptqsm -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-ptqsm ns-dacvd HorizontalScaling nebula-tdmfim graphd Running 0/1 Sep 11,2025 18:15 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 nebula-tdmfim-graphd-2 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:15 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 
11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.username***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.password***"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done No resources found in nebula-tdmfim namespace. 
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-ptqsm ns-dacvd HorizontalScaling nebula-tdmfim graphd Succeed 1/1 Sep 11,2025 18:15 UTC+0800 check ops status done ops_status:nebula-tdmfim-horizontalscaling-ptqsm ns-dacvd HorizontalScaling nebula-tdmfim graphd Succeed 1/1 Sep 11,2025 18:15 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-horizontalscaling-ptqsm --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-horizontalscaling-ptqsm patched `kbcli cluster delete-ops --name nebula-tdmfim-horizontalscaling-ptqsm --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-ptqsm deleted cluster graphd scale-in cluster graphd scale-in replicas: 2 check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-tdmfim namespace.
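The `kbcli cluster scale-out` call above is shorthand for creating a HorizontalScaling OpsRequest. A minimal sketch of an equivalent manifest, assuming the `operations.kubeblocks.io/v1alpha1` API group that this log's own `kubectl patch` commands reference (the metadata name is illustrative, and field names such as `scaleOut.replicaChanges` may differ between KubeBlocks releases):

```yaml
# Hypothetical OpsRequest equivalent to:
#   kbcli cluster scale-out nebula-tdmfim --components graphd --replicas 1
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: graphd-scale-out          # illustrative name; kbcli generates a random suffix
  namespace: ns-dacvd
spec:
  clusterName: nebula-tdmfim
  type: HorizontalScaling
  horizontalScaling:
  - componentName: graphd
    scaleOut:
      replicaChanges: 1           # add one graphd replica (2 -> 3), as in the run above
```

Applying such a manifest with `kubectl apply -f` should surface as the same HorizontalScaling entry in `kbcli cluster list-ops`.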
`kbcli cluster scale-in nebula-tdmfim --auto-approve --force=true --components graphd --replicas 1 --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-zndj8 created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-horizontalscaling-zndj8 -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-zndj8 ns-dacvd HorizontalScaling nebula-tdmfim graphd Running 0/1 Sep 11,2025 18:16 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 100m / 100m 512Mi / 512Mi logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 
644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done No resources found in nebula-tdmfim namespace.
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-zndj8 ns-dacvd HorizontalScaling nebula-tdmfim graphd Succeed 1/1 Sep 11,2025 18:16 UTC+0800 check ops status done ops_status:nebula-tdmfim-horizontalscaling-zndj8 ns-dacvd HorizontalScaling nebula-tdmfim graphd Succeed 1/1 Sep 11,2025 18:16 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-horizontalscaling-zndj8 --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-horizontalscaling-zndj8 patched `kbcli cluster delete-ops --name nebula-tdmfim-horizontalscaling-zndj8 --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-zndj8 deleted check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale nebula-tdmfim --auto-approve --force=true --components graphd --cpu 200m --memory 0.6Gi --namespace ns-dacvd ` OpsRequest nebula-tdmfim-verticalscaling-sgdtl created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-verticalscaling-sgdtl -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-verticalscaling-sgdtl ns-dacvd VerticalScaling nebula-tdmfim graphd Running 0/2 Sep 11,2025 18:17 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:17 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:17 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi 
aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-verticalscaling-sgdtl ns-dacvd VerticalScaling nebula-tdmfim graphd Succeed 2/2 Sep 11,2025 18:17 UTC+0800 check ops status done ops_status:nebula-tdmfim-verticalscaling-sgdtl ns-dacvd VerticalScaling nebula-tdmfim graphd Succeed 2/2 Sep 11,2025 18:17 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-verticalscaling-sgdtl --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-verticalscaling-sgdtl patched `kbcli cluster delete-ops --name nebula-tdmfim-verticalscaling-sgdtl --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-verticalscaling-sgdtl deleted cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-tdmfim --auto-approve --force=true --components graphd --namespace ns-dacvd ` OpsRequest nebula-tdmfim-restart-xpsmb created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-restart-xpsmb -n ns-dacvd
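The vertical scaling just completed can likewise be expressed as a declarative OpsRequest. A sketch under the same assumption (`operations.kubeblocks.io/v1alpha1` API, illustrative name; field layout may vary by release). Note that the `0.6Gi` memory value is exactly what the pod listings report as `644245094400m`, i.e. 0.6 × 1024³ bytes rendered in Kubernetes milli-units:

```yaml
# Hypothetical OpsRequest equivalent to:
#   kbcli cluster vscale nebula-tdmfim --components graphd --cpu 200m --memory 0.6Gi
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: graphd-vscale             # illustrative name
  namespace: ns-dacvd
spec:
  clusterName: nebula-tdmfim
  type: VerticalScaling
  verticalScaling:
  - componentName: graphd
    requests:
      cpu: 200m
      memory: 0.6Gi               # shows up as 644245094400m in list-instances output
    limits:
      cpu: 200m
      memory: 0.6Gi
```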
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-xpsmb ns-dacvd Restart nebula-tdmfim graphd Running 0/2 Sep 11,2025 18:18 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:19 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:18 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 
Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:55 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:54 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-xpsmb ns-dacvd Restart nebula-tdmfim graphd Succeed 2/2 Sep 11,2025 18:18 UTC+0800 check ops status done ops_status:nebula-tdmfim-restart-xpsmb ns-dacvd Restart nebula-tdmfim graphd Succeed 2/2 Sep 11,2025 18:18 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-restart-xpsmb --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-restart-xpsmb patched `kbcli cluster
delete-ops --name nebula-tdmfim-restart-xpsmb --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-restart-xpsmb deleted check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )` cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart nebula-tdmfim --auto-approve --force=true --components storaged --namespace ns-dacvd ` OpsRequest nebula-tdmfim-restart-b58p6 created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-restart-b58p6 -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-b58p6 ns-dacvd Restart nebula-tdmfim storaged Running 0/3 Sep 11,2025 18:20 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ 
CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:19 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:18 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:21 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:20 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-restart-b58p6 ns-dacvd Restart nebula-tdmfim storaged Succeed 3/3 Sep 11,2025 18:20 UTC+0800 check ops status done ops_status:nebula-tdmfim-restart-b58p6 ns-dacvd Restart nebula-tdmfim storaged Succeed 3/3 Sep 11,2025 18:20 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-restart-b58p6 --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-restart-b58p6 patched `kbcli cluster delete-ops --name nebula-tdmfim-restart-b58p6 --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-restart-b58p6 deleted check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )` cluster storaged scale-out check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-tdmfim namespace.
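The two restart operations above (graphd, then storaged) map to the simplest OpsRequest type. A sketch under the same assumptions as before (`operations.kubeblocks.io/v1alpha1`, illustrative name):

```yaml
# Hypothetical OpsRequest equivalent to:
#   kbcli cluster restart nebula-tdmfim --components storaged
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: storaged-restart          # illustrative name
  namespace: ns-dacvd
spec:
  clusterName: nebula-tdmfim
  type: Restart
  restart:
  - componentName: storaged       # pods are recreated one by one, matching the
                                  # 0/3 -> 3/3 PROGRESS column in the log above
```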
`kbcli cluster scale-out nebula-tdmfim --auto-approve --force=true --components storaged --replicas 1 --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-dphxp created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-horizontalscaling-dphxp -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-dphxp ns-dacvd HorizontalScaling nebula-tdmfim storaged Running 0/1 Sep 11,2025 18:22 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE 
NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:19 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:18 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:21 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:20 UTC+0800 logs:10Gi nebula-tdmfim-storaged-3 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:22 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done No resources found in nebula-tdmfim namespace. check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-dphxp ns-dacvd HorizontalScaling nebula-tdmfim storaged Succeed 1/1 Sep 11,2025 18:22 UTC+0800 check ops status done ops_status:nebula-tdmfim-horizontalscaling-dphxp ns-dacvd HorizontalScaling nebula-tdmfim storaged Succeed 1/1 Sep 11,2025 18:22 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-horizontalscaling-dphxp --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-horizontalscaling-dphxp patched `kbcli cluster delete-ops --name nebula-tdmfim-horizontalscaling-dphxp --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-dphxp deleted check component storaged exists `kubectl get components -l app.kubernetes.io/instance=nebula-tdmfim,apps.kubeblocks.io/component-name=storaged --namespace ns-dacvd | (grep "storaged" || true )` cluster storaged scale-in check cluster status before ops check cluster status done cluster_status:Running No resources found in nebula-tdmfim namespace.
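Scale-in is the mirror image of the scale-out just performed: the same HorizontalScaling type with a `scaleIn` block instead of `scaleOut`. A sketch under the same assumptions (`operations.kubeblocks.io/v1alpha1`, illustrative name, field names subject to release differences):

```yaml
# Hypothetical OpsRequest equivalent to:
#   kbcli cluster scale-in nebula-tdmfim --components storaged --replicas 1
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: storaged-scale-in         # illustrative name
  namespace: ns-dacvd
spec:
  clusterName: nebula-tdmfim
  type: HorizontalScaling
  horizontalScaling:
  - componentName: storaged
    scaleIn:
      replicaChanges: 1           # remove one storaged replica (4 -> 3), as below
```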
`kbcli cluster scale-in nebula-tdmfim --auto-approve --force=true --components storaged --replicas 1 --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-t85dd created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-horizontalscaling-t85dd -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-t85dd ns-dacvd HorizontalScaling nebula-tdmfim storaged Running 0/1 Sep 11,2025 18:25 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME nebula-tdmfim-graphd-0 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:19 UTC+0800 nebula-tdmfim-graphd-1 ns-dacvd nebula-tdmfim graphd Running 0 200m / 200m 644245094400m / 644245094400m logs:10Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:18 UTC+0800 nebula-tdmfim-metad-0 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 17:58 UTC+0800 logs:10Gi nebula-tdmfim-metad-1 ns-dacvd nebula-tdmfim metad Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 17:57 UTC+0800 logs:10Gi nebula-tdmfim-metad-2 ns-dacvd nebula-tdmfim metad 
Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 17:56 UTC+0800 logs:10Gi nebula-tdmfim-storaged-0 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000004/10.224.0.6 Sep 11,2025 18:21 UTC+0800 logs:10Gi nebula-tdmfim-storaged-1 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000000/10.224.0.5 Sep 11,2025 18:20 UTC+0800 logs:10Gi nebula-tdmfim-storaged-2 ns-dacvd nebula-tdmfim storaged Running 0 200m / 200m 644245094400m / 644245094400m data:6Gi aks-cicdamdpool-42771698-vmss000002/10.224.0.9 Sep 11,2025 18:20 UTC+0800 logs:10Gi check pod status done `kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"` `kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default check cluster connect `echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash` check cluster connect done No resources found in nebula-tdmfim namespace.
check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-horizontalscaling-t85dd ns-dacvd HorizontalScaling nebula-tdmfim storaged Succeed 1/1 Sep 11,2025 18:25 UTC+0800 check ops status done ops_status:nebula-tdmfim-horizontalscaling-t85dd ns-dacvd HorizontalScaling nebula-tdmfim storaged Succeed 1/1 Sep 11,2025 18:25 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-horizontalscaling-t85dd --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-horizontalscaling-t85dd patched `kbcli cluster delete-ops --name nebula-tdmfim-horizontalscaling-t85dd --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-horizontalscaling-t85dd deleted cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop nebula-tdmfim --auto-approve --force=true --namespace ns-dacvd ` OpsRequest nebula-tdmfim-stop-qkkr6 created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-stop-qkkr6 -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-stop-qkkr6 ns-dacvd Stop nebula-tdmfim graphd,metad,storaged Running 0/8 Sep 11,2025 18:25 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Stopped Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT)
MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-stop-qkkr6 ns-dacvd Stop nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 18:25 UTC+0800 check ops status done ops_status:nebula-tdmfim-stop-qkkr6 ns-dacvd Stop nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 18:25 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-stop-qkkr6 --namespace ns-dacvd ` opsrequest.operations.kubeblocks.io/nebula-tdmfim-stop-qkkr6 patched `kbcli cluster delete-ops --name nebula-tdmfim-stop-qkkr6 --force --auto-approve --namespace ns-dacvd ` OpsRequest nebula-tdmfim-stop-qkkr6 deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start nebula-tdmfim --force=true --namespace ns-dacvd ` OpsRequest nebula-tdmfim-start-mwkps created successfully, you can view the progress: kbcli cluster describe-ops nebula-tdmfim-start-mwkps -n ns-dacvd check ops status `kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME nebula-tdmfim-start-mwkps ns-dacvd Start nebula-tdmfim graphd,metad,storaged Running 0/8 Sep 11,2025 18:26 UTC+0800 check cluster status `kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS nebula-tdmfim ns-dacvd nebula Delete Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
cluster_status:Updating (status polled repeatedly until the Start OpsRequest finished)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
nebula-tdmfim-graphd-0     ns-dacvd   nebula-tdmfim   graphd     Running   0   200m / 200m   644245094400m / 644245094400m   logs:10Gi            aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-graphd-1     ns-dacvd   nebula-tdmfim   graphd     Running   0   200m / 200m   644245094400m / 644245094400m   logs:10Gi            aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-metad-0      ns-dacvd   nebula-tdmfim   metad      Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-metad-1      ns-dacvd   nebula-tdmfim   metad      Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-metad-2      ns-dacvd   nebula-tdmfim   metad      Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-storaged-0   ns-dacvd   nebula-tdmfim   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000004/10.224.0.6   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-storaged-1   ns-dacvd   nebula-tdmfim   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000002/10.224.0.9   Sep 11,2025 18:26 UTC+0800
nebula-tdmfim-storaged-2   ns-dacvd   nebula-tdmfim   storaged   Running   0   200m / 200m   644245094400m / 644245094400m   data:6Gi logs:10Gi   aks-cicdamdpool-42771698-vmss000000/10.224.0.5   Sep 11,2025 18:26 UTC+0800
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops nebula-tdmfim --status all --namespace ns-dacvd`
NAME                        NAMESPACE   TYPE    CLUSTER         COMPONENT               STATUS    PROGRESS   CREATED-TIME
nebula-tdmfim-start-mwkps   ns-dacvd    Start   nebula-tdmfim   graphd,metad,storaged   Succeed   8/8        Sep 11,2025 18:26 UTC+0800
check ops status done
ops_status:nebula-tdmfim-start-mwkps ns-dacvd Start nebula-tdmfim graphd,metad,storaged Succeed 8/8 Sep 11,2025 18:26 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations nebula-tdmfim-start-mwkps --namespace ns-dacvd`
opsrequest.operations.kubeblocks.io/nebula-tdmfim-start-mwkps patched
`kbcli cluster delete-ops --name nebula-tdmfim-start-mwkps --force --auto-approve --namespace ns-dacvd`
OpsRequest nebula-tdmfim-start-mwkps deleted
cluster update terminationPolicy WipeOut
`kbcli cluster update nebula-tdmfim --termination-policy=WipeOut --namespace ns-dacvd`
cluster.apps.kubeblocks.io/nebula-tdmfim updated
check cluster status
`kbcli cluster list nebula-tdmfim --show-labels --namespace ns-dacvd`
NAME            NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
nebula-tdmfim   ns-dacvd    nebula               WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=nebula-tdmfim,clusterdefinition.kubeblocks.io/name=nebula
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances nebula-tdmfim --namespace ns-dacvd`
(output identical to the previous list-instances check: all eight pods Running)
check pod status done
`kubectl get secrets -l app.kubernetes.io/instance=nebula-tdmfim`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:K36od9t63Y9!#17#;DB_PORT:9669;DB_DATABASE:default
check cluster connect
`echo "/usr/local/nebula/console/nebula-console --addr nebula-tdmfim-graphd.ns-dacvd.svc.cluster.local --user root --password 'K36od9t63Y9!#17#' --port 9669" | kubectl exec -it nebula-tdmfim-storaged-0 --namespace ns-dacvd -- bash`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs nebula-tdmfim --namespace ns-dacvd`
No log files found.
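The `jsonpath` lookups above return the Secret fields base64-encoded, so each value has to be decoded before it can be passed to `nebula-console`. A minimal sketch of the decode step, using locally encoded stand-ins for the Secret data (the literals here only play the role of `kubectl`'s output; they are not the cluster's real stored values):

```shell
#!/bin/sh
# Stand-ins for the output of:
#   kubectl get secrets nebula-tdmfim-graphd-account-root -o jsonpath='{.data.username}'  (etc.)
# Kubernetes stores Secret .data fields base64-encoded; these simulate that.
encoded_user=$(printf 'root' | base64)
encoded_port=$(printf '9669' | base64)

# Decode before handing the credentials to the client.
DB_USERNAME=$(printf '%s' "$encoded_user" | base64 -d)
DB_PORT=$(printf '%s' "$encoded_port" | base64 -d)
echo "DB_USERNAME:$DB_USERNAME;DB_PORT:$DB_PORT"
```

Alternatively, `kubectl get secrets ... -o jsonpath='{.data.username}' | base64 -d` does the decode in one pipeline.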
Error from server (NotFound): pods "nebula-tdmfim-graphd-0" not found
cluster logs
`kbcli cluster logs nebula-tdmfim --tail 30 --namespace ns-dacvd`
Defaulted container "graphd" out of: graphd, agent, exporter, kbagent, config-manager, init-console (init), init-agent (init), init-kbagent (init), kbagent-worker (init), install-config-manager-tool (init)
I20250911 10:35:25.348403 36 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.3.206]:48442
E20250911 10:35:25.442160 33 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779
tail: '/usr/local/nebula/logs/nebula-graphd.WARNING' has been replaced; following end of new file
tail: '/usr/local/nebula/logs/nebula-graphd.ERROR' has been replaced; following end of new file
I20250911 10:35:28.033756 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-0.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 0 space
I20250911 10:35:28.033803 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 0 space
I20250911 10:35:28.033830 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-2.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 1 space
I20250911 10:35:28.033838 62 MetaClient.cpp:3267] Load leader ok
==> /usr/local/nebula/logs/nebula-graphd.WARNING <==
Log file created at: 2025/09/11 10:35:25
Running on machine: nebula-tdmfim-graphd-0
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
E20250911 10:35:25.442160 33 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779
==> /usr/local/nebula/logs/nebula-graphd.ERROR <==
Log file created at: 2025/09/11 10:35:25
Running on machine: nebula-tdmfim-graphd-0
Running duration (h:mm:ss): 0:00:00
Log line format: [IWEF]yyyymmdd hh:mm:ss.uuuuuu threadid file:line] msg
E20250911 10:35:25.442160 33 QueryInstance.cpp:151] Existed!, query: ADD HOSTS "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779
==> /usr/local/nebula/logs/nebula-graphd.INFO <==
I20250911 10:35:38.058784 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-0.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 0 space
I20250911 10:35:38.058825 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-1.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 0 space
I20250911 10:35:38.058843 62 MetaClient.cpp:3261] Load leader of "nebula-tdmfim-storaged-2.nebula-tdmfim-storaged-headless.ns-dacvd.svc.cluster.local":9779 in 1 space
I20250911 10:35:38.058849 62 MetaClient.cpp:3267] Load leader ok
I20250911 10:40:48.321574 35 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.1.117]:60756
I20250911 10:41:02.523059 33 GraphService.cpp:77] Authenticating user root from [::ffff:10.244.1.117]:51916
delete cluster nebula-tdmfim
`kbcli cluster delete nebula-tdmfim --auto-approve --namespace ns-dacvd`
Cluster nebula-tdmfim deleted
pod_info:
nebula-tdmfim-graphd-0     5/5   Running   0   14m
nebula-tdmfim-graphd-1     5/5   Running   0   14m
nebula-tdmfim-metad-0      4/4   Running   0   14m
nebula-tdmfim-metad-1      4/4   Running   0   14m
nebula-tdmfim-metad-2      4/4   Running   0   14m
nebula-tdmfim-storaged-0   5/5   Running   0   14m
nebula-tdmfim-storaged-1   5/5   Running   0   14m
nebula-tdmfim-storaged-2   5/5   Running   0   14m
No resources found in ns-dacvd namespace.
delete cluster pod done
No resources found in ns-dacvd namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-dacvd namespace.
delete cluster done
No resources found in ns-dacvd namespace.
No resources found in ns-dacvd namespace.
No resources found in ns-dacvd namespace.
Nebula Test Suite All Done!
Test Engine: nebula
Test Type: 12
--------------------------------------Nebula (Topology = default Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[ComponentDefinition=nebula-graphd-1.0.1;ComponentVersion=nebula;ServiceVersion=v3.5.0;]|[Description=Create a cluster with the specified component definition nebula-graphd-1.0.1 and component version nebula and service version v3.5.0]
[PASSED]|[Connect]|[ComponentName=graphd]|[Description=Connect to the cluster]
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=graphd]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes, thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high connection load.]
[PASSED]|[VerticalScaling]|[ComponentName=metad]|[Description=VerticalScaling the cluster specify component metad]
[PASSED]|[VerticalScaling]|[ComponentName=storaged]|[Description=VerticalScaling the cluster specify component storaged]
[PASSED]|[VolumeExpansion]|[ComponentName=metad;ComponentVolume=data]|[Description=VolumeExpansion the cluster specify component metad and volume data]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Restart]|[ComponentName=metad]|[Description=Restart the cluster specify component metad]
[PASSED]|[VolumeExpansion]|[ComponentName=graphd,metad,storaged;ComponentVolume=logs]|[Description=VolumeExpansion the cluster specify component graphd,metad,storaged and volume logs]
[PASSED]|[HorizontalScaling Out]|[ComponentName=graphd]|[Description=HorizontalScaling Out the cluster specify component graphd]
[PASSED]|[HorizontalScaling In]|[ComponentName=graphd]|[Description=HorizontalScaling In the cluster specify component graphd]
[PASSED]|[VerticalScaling]|[ComponentName=graphd]|[Description=VerticalScaling the cluster specify component graphd]
[PASSED]|[Restart]|[ComponentName=graphd]|[Description=Restart the cluster specify component graphd]
[PASSED]|[Restart]|[ComponentName=storaged]|[Description=Restart the cluster specify component storaged]
[PASSED]|[HorizontalScaling Out]|[ComponentName=storaged]|[Description=HorizontalScaling Out the cluster specify component storaged]
[PASSED]|[HorizontalScaling In]|[ComponentName=storaged]|[Description=HorizontalScaling In the cluster specify component storaged]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
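The result summary uses a pipe-delimited `[Status]|[Type]|[Params]|[Description]` layout, which is easy to tally mechanically. A minimal sketch that counts PASSED entries, run here against two sample lines copied from the summary (in practice the input would be the whole report):

```shell
#!/bin/sh
# Two sample lines in the summary's [Status]|[Type]|[Params]|[Description] layout.
results='[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]'

# Count entries whose line starts with the [PASSED] status field.
passed=$(printf '%s\n' "$results" | grep -c '^\[PASSED\]')
echo "passed:$passed"
```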