source commons files
source engines files
source kubeblocks files

`kubectl get namespace | grep ns-hagst`
`kubectl create namespace ns-hagst`
namespace/ns-hagst created
create namespace ns-hagst done

download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.4-beta.1`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
100 32.1M  100 32.1M    0     0  51.4M      0 --:--:-- --:--:-- --:--:-- 51.4M
kbcli installed successfully.
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 0.9.4
kbcli: 0.9.4-beta.1
WARNING: version difference between kbcli (0.9.4-beta.1) and kubeblocks (0.9.4)
Make sure your docker service is running and begin your journey with kbcli:
	kbcli playground init
For more information on how to get started, please visit:
	https://kubeblocks.io
download kbcli v0.9.4-beta.1 done

Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 0.9.4
kbcli: 0.9.4-beta.1
WARNING: version difference between kbcli (0.9.4-beta.1) and kubeblocks (0.9.4)
Kubernetes Env: v1.32.5-eks-5d4a308
POD_RESOURCES: No resources found
found default storage class: gp3
kubeblocks version is: 0.9.4
skip upgrade kubeblocks

Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed

check cluster definition
set component name: frontend
set component version
No resources found
no component version found
unsupported component definition: not found component version
set replicas first: 2
set replicas third: 2
set minimum cmpv service version
set minimum cmpv service version replicas: 2
REPORT_COUNT: 1
CLUSTER_TOPOLOGY: Not found topology in cluster definition greptimedb
LIMIT_CPU: 0.1
LIMIT_MEMORY: 0.5
storage size: 1
No resources found in ns-hagst namespace.
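The install step pins kbcli to a specific release rather than taking the latest. A minimal sketch of the same setup, assuming only the `install_cli.sh` entry point shown above and that the namespace may already exist:

```bash
#!/usr/bin/env bash
set -euo pipefail

NS=ns-hagst                     # test namespace used throughout this run
KBCLI_VERSION=v0.9.4-beta.1     # pinned CLI; a kbcli/KubeBlocks mismatch only triggers the WARNING above

# Create the namespace only if it is missing (kubectl create fails on duplicates).
kubectl get namespace "$NS" >/dev/null 2>&1 || kubectl create namespace "$NS"

# Install the pinned kbcli release.
curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s "$KBCLI_VERSION"
kbcli version
```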
termination_policy:Delete
create 2 replica Delete greptimedb cluster
check cluster version
check cluster definition

apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: greptime-grhrmh
  namespace: ns-hagst
spec:
  clusterDefinitionRef: greptimedb
  clusterVersionRef: greptimedb-0.3.2
  terminationPolicy: Delete
  componentSpecs:
    - name: frontend
      componentDefRef: frontend
      replicas: 2
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
    - name: datanode
      componentDefRef: datanode
      replicas: 2
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: datanode
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: meta
      componentDefRef: meta
      replicas: 1
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
    - name: etcd
      componentDefRef: etcd
      replicas: 3
      resources:
        requests:
          cpu: 100m
          memory: 0.5Gi
        limits:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: etcd-storage
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi

`kubectl apply -f test_create_greptime-grhrmh.yaml`
cluster.apps.kubeblocks.io/greptime-grhrmh created
apply test_create_greptime-grhrmh.yaml Success
`rm -rf test_create_greptime-grhrmh.yaml`

check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete      Jun 19,2025 18:14 UTC+0800   clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:
cluster_status:Creating
cluster_status:Updating (x2)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-11-106.us-west-2.compute.internal/172.31.11.106   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:14 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

`kubectl get secrets -l app.kubernetes.io/instance=greptime-grhrmh`
`kubectl get secrets greptime-grhrmh-conn-credential -o jsonpath="{.data.username}"`
`kubectl get secrets greptime-grhrmh-conn-credential -o jsonpath="{.data.password}"`
`kubectl get secrets greptime-grhrmh-conn-credential -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:fcvh959l;DB_PORT:4002;DB_DATABASE:
check pod greptime-grhrmh-frontend-0 container_name frontend exist password fcvh959l
No container logs contain secret password.

describe cluster
`kbcli cluster describe greptime-grhrmh --namespace ns-hagst`
Name: greptime-grhrmh	Created Time: Jun 19,2025 18:14 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION            STATUS    TERMINATION-POLICY
ns-hagst    greptimedb           greptimedb-0.3.2   Running   Delete

Endpoints:
COMPONENT   MODE        INTERNAL                                                    EXTERNAL
frontend    ReadWrite   greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4000
                        greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001
                        greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4002
                        greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4003
                        greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4242
datanode    ReadWrite   greptime-grhrmh-datanode.ns-hagst.svc.cluster.local:4000
                        greptime-grhrmh-datanode.ns-hagst.svc.cluster.local:4001
meta        ReadWrite   greptime-grhrmh-meta.ns-hagst.svc.cluster.local:3002
                        greptime-grhrmh-meta.ns-hagst.svc.cluster.local:4000
etcd        ReadWrite   greptime-grhrmh-etcd.ns-hagst.svc.cluster.local:2379
                        greptime-grhrmh-etcd.ns-hagst.svc.cluster.local:2380

Topology:
COMPONENT   INSTANCE   ROLE   STATUS   AZ   NODE   CREATED-TIME
datanode   greptime-grhrmh-datanode-0      Running   us-west-2a   ip-172-31-11-106.us-west-2.compute.internal/172.31.11.106   Jun 19,2025 18:14 UTC+0800
datanode   greptime-grhrmh-datanode-1      Running   us-west-2a   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:15 UTC+0800
etcd   greptime-grhrmh-etcd-0   follower   Running   us-west-2a   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
etcd   greptime-grhrmh-etcd-1   follower   Running   us-west-2a   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
etcd   greptime-grhrmh-etcd-2   leader   Running   us-west-2a   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
frontend   greptime-grhrmh-frontend-0      Running   us-west-2a   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
frontend   greptime-grhrmh-frontend-1      Running   us-west-2a   ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
meta   greptime-grhrmh-meta-0      Running   us-west-2a   ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:14 UTC+0800

Resources Allocation:
COMPONENT   DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE       STORAGE-CLASS
frontend    false       100m / 100m          512Mi / 512Mi
datanode    false       100m / 100m          512Mi / 512Mi           datanode:1Gi       kb-default-sc
meta        false       100m / 100m          512Mi / 512Mi
etcd        false       100m / 100m          512Mi / 512Mi           etcd-storage:1Gi   kb-default-sc

Images:
COMPONENT   TYPE       IMAGE
frontend    frontend   docker.io/apecloud/greptimedb:0.3.2
datanode    datanode   docker.io/apecloud/greptimedb:0.3.2
meta        meta       docker.io/apecloud/greptimedb:0.3.2
etcd        etcd       docker.io/apecloud/etcd:v3.5.5
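The credential check above reads `username`, `password`, and `port` from the `greptime-grhrmh-conn-credential` secret. Secret data is base64-encoded, so a decode step is needed to get the plain values printed in the `DB_USERNAME:...` line; a minimal sketch, assuming those key names:

```bash
NS=ns-hagst
SECRET=greptime-grhrmh-conn-credential

# Secret values are base64-encoded; decode to recover the plaintext credentials.
DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)
echo "DB_USERNAME:$DB_USERNAME;DB_PASSWORD:$DB_PASSWORD;DB_PORT:$DB_PORT"
```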
Show cluster events: kbcli cluster list-events -n ns-hagst greptime-grhrmh

`kbcli cluster label greptime-grhrmh app.kubernetes.io/instance- --namespace ns-hagst`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label greptime-grhrmh app.kubernetes.io/instance=greptime-grhrmh --namespace ns-hagst`
`kbcli cluster label greptime-grhrmh --list --namespace ns-hagst`
NAME   NAMESPACE   LABELS
greptime-grhrmh   ns-hagst   app.kubernetes.io/instance=greptime-grhrmh clusterdefinition.kubeblocks.io/name=greptimedb clusterversion.kubeblocks.io/name=greptimedb-0.3.2
label cluster app.kubernetes.io/instance=greptime-grhrmh Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=greptime-grhrmh --namespace ns-hagst`
`kbcli cluster label greptime-grhrmh --list --namespace ns-hagst`
NAME   NAMESPACE   LABELS
greptime-grhrmh   ns-hagst   app.kubernetes.io/instance=greptime-grhrmh case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=greptimedb clusterversion.kubeblocks.io/name=greptimedb-0.3.2
label cluster case.name=kbcli.test1 Success
`kbcli cluster label greptime-grhrmh case.name=kbcli.test2 --overwrite --namespace ns-hagst`
`kbcli cluster label greptime-grhrmh --list --namespace ns-hagst`
NAME   NAMESPACE   LABELS
greptime-grhrmh   ns-hagst   app.kubernetes.io/instance=greptime-grhrmh case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=greptimedb clusterversion.kubeblocks.io/name=greptimedb-0.3.2
label cluster case.name=kbcli.test2 Success
`kbcli cluster label greptime-grhrmh case.name- --namespace ns-hagst`
`kbcli cluster label greptime-grhrmh --list --namespace ns-hagst`
NAME   NAMESPACE   LABELS
greptime-grhrmh   ns-hagst   app.kubernetes.io/instance=greptime-grhrmh clusterdefinition.kubeblocks.io/name=greptimedb clusterversion.kubeblocks.io/name=greptimedb-0.3.2
delete cluster label case.name Success
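The label test walks one full lifecycle: add, add via selector, overwrite, remove. Condensed into a reusable sketch (all flags as used above):

```bash
NS=ns-hagst
C=greptime-grhrmh

kbcli cluster label "$C" case.name=kbcli.test1 --namespace "$NS"              # add a label
kbcli cluster label "$C" case.name=kbcli.test2 --overwrite --namespace "$NS"  # change it; --overwrite is required
kbcli cluster label "$C" case.name- --namespace "$NS"                         # a trailing '-' removes the key
kbcli cluster label "$C" --list --namespace "$NS"                             # verify the result
```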
cluster connect
`echo 'echo "show databases;" | greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
Defaulted container "frontend" out of: frontend, wait-datanode (init)
Unable to use a TTY - input is not a terminal or the right kind of file
2025-06-19T10:16:48.033813Z INFO greptime: short_version: 0.3.2, full_version: greptimedb-HEAD-4b580f4
Ready for commands. (Hint: try 'help')
2025-06-19T10:16:48.034015Z INFO greptime: command line arguments
2025-06-19T10:16:48.034028Z INFO greptime: argument: greptime
2025-06-19T10:16:48.034044Z INFO greptime: argument: cli
2025-06-19T10:16:48.034050Z INFO greptime: argument: attach
2025-06-19T10:16:48.034056Z INFO greptime: argument: --grpc-addr
2025-06-19T10:16:48.034062Z INFO greptime: argument: greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001
+---------+
| Schemas |
+---------+
| public  |
+---------+
Total Rows: 1
Cost 141 ms
connect cluster Success

check component datanode exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode --namespace ns-hagst | (grep "datanode" || true )`

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart greptime-grhrmh --auto-approve --force=true --components datanode --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-zllfw created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-restart-zllfw -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-zllfw   ns-hagst   Restart   greptime-grhrmh   datanode   Creating   -/-   Jun 19,2025 18:16 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x12)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:14 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-zllfw   ns-hagst   Restart   greptime-grhrmh   datanode   Succeed   2/2   Jun 19,2025 18:16 UTC+0800
check ops status done
ops_status:greptime-grhrmh-restart-zllfw ns-hagst Restart greptime-grhrmh datanode Succeed 2/2 Jun 19,2025 18:16 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-restart-zllfw --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-restart-zllfw patched
`kbcli cluster delete-ops --name greptime-grhrmh-restart-zllfw --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-zllfw deleted

check component frontend exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=frontend --namespace ns-hagst | (grep "frontend" || true )`
check component datanode exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode --namespace ns-hagst | (grep "datanode" || true )`
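Every operation above is followed by the same poll-until-done pattern ("check ops status ... Succeed"). A sketch of that wait loop against the OpsRequest resource directly, assuming its `.status.phase` field carries the phases seen in this log (Pending/Creating/Running/Succeed):

```bash
# Poll an OpsRequest until it reaches a terminal phase.
wait_ops() {
  local ns=$1 ops=$2 phase
  while true; do
    phase=$(kubectl get opsrequests "$ops" -n "$ns" -o jsonpath='{.status.phase}')
    echo "ops_status:$phase"
    case "$phase" in
      Succeed) return 0 ;;
      Failed|Cancelled) return 1 ;;
    esac
    sleep 5
  done
}

# Example from this run:
# wait_ops ns-hagst greptime-grhrmh-restart-zllfw
```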
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in greptime-grhrmh namespace.
`kbcli cluster hscale greptime-grhrmh --auto-approve --force=true --components frontend,datanode --replicas 3 --namespace ns-hagst`
OpsRequest greptime-grhrmh-horizontalscaling-mql4l created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-horizontalscaling-mql4l -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-horizontalscaling-mql4l   ns-hagst   HorizontalScaling   greptime-grhrmh   frontend,datanode   Running   0/2   Jun 19,2025 18:18 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-2   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:18 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-frontend-2   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:18 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:14 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
No resources found in greptime-grhrmh namespace.

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-horizontalscaling-mql4l   ns-hagst   HorizontalScaling   greptime-grhrmh   frontend,datanode   Succeed   2/2   Jun 19,2025 18:18 UTC+0800
check ops status done
ops_status:greptime-grhrmh-horizontalscaling-mql4l ns-hagst HorizontalScaling greptime-grhrmh frontend,datanode Succeed 2/2 Jun 19,2025 18:18 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-horizontalscaling-mql4l --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-horizontalscaling-mql4l patched
`kbcli cluster delete-ops --name greptime-grhrmh-horizontalscaling-mql4l --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-horizontalscaling-mql4l deleted

check component frontend exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=frontend --namespace ns-hagst | (grep "frontend" || true )`
check component datanode exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode --namespace ns-hagst | (grep "datanode" || true )`
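After every successful operation the harness clears the OpsRequest finalizers and then force-deletes it, so a stuck finalizer cannot block the next case. The two-step pattern, using the horizontal-scaling request from this phase as the example:

```bash
NS=ns-hagst
OPS=greptime-grhrmh-horizontalscaling-mql4l   # name taken from this run

# Drop finalizers first, then force-delete the OpsRequest.
kubectl patch opsrequests "$OPS" -n "$NS" --type=merge -p '{"metadata":{"finalizers":[]}}'
kbcli cluster delete-ops --name "$OPS" --force --auto-approve --namespace "$NS"
```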
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in greptime-grhrmh namespace.
`kbcli cluster hscale greptime-grhrmh --auto-approve --force=true --components frontend,datanode --replicas 2 --namespace ns-hagst`
OpsRequest greptime-grhrmh-horizontalscaling-jj9tf created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-horizontalscaling-jj9tf -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-horizontalscaling-jj9tf   ns-hagst   HorizontalScaling   greptime-grhrmh   frontend,datanode   Pending   -/-   Jun 19,2025 18:18 UTC+0800
ops_status:greptime-grhrmh-horizontalscaling-jj9tf ns-hagst HorizontalScaling greptime-grhrmh frontend,datanode Creating -/- Jun 19,2025 18:18 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Running   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:14 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
No resources found in greptime-grhrmh namespace.

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-horizontalscaling-jj9tf   ns-hagst   HorizontalScaling   greptime-grhrmh   frontend,datanode   Succeed   2/2   Jun 19,2025 18:18 UTC+0800
check ops status done
ops_status:greptime-grhrmh-horizontalscaling-jj9tf ns-hagst HorizontalScaling greptime-grhrmh frontend,datanode Succeed 2/2 Jun 19,2025 18:18 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-horizontalscaling-jj9tf --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-horizontalscaling-jj9tf patched
`kbcli cluster delete-ops --name greptime-grhrmh-horizontalscaling-jj9tf --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-horizontalscaling-jj9tf deleted

check component meta exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=meta --namespace ns-hagst | (grep "meta" || true )`

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart greptime-grhrmh --auto-approve --force=true --components meta --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-kp6vk created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-restart-kp6vk -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-kp6vk   ns-hagst   Restart   greptime-grhrmh   meta   Running   0/1   Jun 19,2025 18:19 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x3)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-11-33.us-west-2.compute.internal/172.31.11.33   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   100m / 100m   512Mi / 512Mi   etcd-storage:1Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:14 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-5-116.us-west-2.compute.internal/172.31.5.116   Jun 19,2025 18:15 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:19 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-kp6vk   ns-hagst   Restart   greptime-grhrmh   meta   Succeed   1/1   Jun 19,2025 18:19 UTC+0800
check ops status done
ops_status:greptime-grhrmh-restart-kp6vk ns-hagst Restart greptime-grhrmh meta Succeed 1/1 Jun 19,2025 18:19 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-restart-kp6vk --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-restart-kp6vk patched
`kbcli cluster delete-ops --name greptime-grhrmh-restart-kp6vk --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-kp6vk deleted

check component frontend exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=frontend --namespace ns-hagst | (grep "frontend" || true )`
check component etcd exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=etcd --namespace ns-hagst | (grep "etcd" || true )`

cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale greptime-grhrmh --auto-approve --force=true --components frontend,etcd --cpu 200m --memory 0.6Gi --namespace ns-hagst`
OpsRequest greptime-grhrmh-verticalscaling-vf27r created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-verticalscaling-vf27r -n ns-hagst
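The pod listings that follow show memory as `644245094400m`. That is not garbage output: 0.6Gi is 0.6 × 1024³ = 644,245,094.4 bytes, which is not a whole number of bytes, so the Kubernetes API renders the quantity in milli-byte units (the `m` suffix). A quick check of the arithmetic:

```bash
# 0.6Gi expressed in bytes and in Kubernetes milli-byte notation.
awk 'BEGIN { b = 0.6 * 1024^3; printf "0.6Gi = %.1f bytes = %.0fm\n", b, b * 1000 }'
# -> 0.6Gi = 644245094.4 bytes = 644245094400m
```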
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-vf27r   ns-hagst   VerticalScaling   greptime-grhrmh   frontend,etcd   Running   0/5   Jun 19,2025 18:20 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x12)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:21 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   100m / 100m   512Mi / 512Mi      ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:19 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-vf27r   ns-hagst   VerticalScaling   greptime-grhrmh   frontend,etcd   Succeed   5/5   Jun 19,2025 18:20 UTC+0800
check ops status done
ops_status:greptime-grhrmh-verticalscaling-vf27r ns-hagst VerticalScaling greptime-grhrmh frontend,etcd Succeed 5/5 Jun 19,2025 18:20 UTC+0800
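Beyond the kbcli instance listing, the applied limits can be confirmed straight from the pod spec; a sketch using pod names from this run:

```bash
# Read the container limits off one of the rescaled pods.
kubectl get pod greptime-grhrmh-frontend-0 -n ns-hagst \
  -o jsonpath='{.spec.containers[0].resources.limits.cpu} {.spec.containers[0].resources.limits.memory}{"\n"}'
# Expect: 200m 644245094400m (i.e. 0.6Gi) once the vscale has succeeded.
```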
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-verticalscaling-vf27r --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-verticalscaling-vf27r patched
`kbcli cluster delete-ops --name greptime-grhrmh-verticalscaling-vf27r --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-verticalscaling-vf27r deleted

check component meta exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=meta --namespace ns-hagst | (grep "meta" || true )`

cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale greptime-grhrmh --auto-approve --force=true --components meta --cpu 200m --memory 0.6Gi --namespace ns-hagst`
OpsRequest greptime-grhrmh-verticalscaling-5hrzg created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-verticalscaling-5hrzg -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-5hrzg   ns-hagst   VerticalScaling   greptime-grhrmh   meta   Running   0/1   Jun 19,2025 18:21 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:21 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:20 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:21 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-5hrzg   ns-hagst   VerticalScaling   greptime-grhrmh   meta   Succeed   1/1   Jun 19,2025 18:21 UTC+0800
check ops status done
ops_status:greptime-grhrmh-verticalscaling-5hrzg ns-hagst VerticalScaling greptime-grhrmh meta Succeed 1/1 Jun 19,2025 18:21 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-verticalscaling-5hrzg --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-verticalscaling-5hrzg patched
`kbcli cluster delete-ops --name greptime-grhrmh-verticalscaling-5hrzg --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-verticalscaling-5hrzg deleted

check component frontend exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=frontend --namespace ns-hagst | (grep "frontend" || true )`
check component etcd exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=etcd --namespace ns-hagst | (grep "etcd" || true )`

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart greptime-grhrmh --auto-approve --force=true --components frontend,etcd --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-hxxf7 created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-restart-hxxf7 -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-hxxf7   ns-hagst   Restart   greptime-grhrmh   frontend,etcd   Running   0/5   Jun 19,2025 18:22 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x10)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:17 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-4-206.us-west-2.compute.internal/172.31.4.206   Jun 19,2025 18:22 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:22 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:22 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:22 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:22 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:21 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-hxxf7   ns-hagst   Restart   greptime-grhrmh   frontend,etcd   Succeed   5/5   Jun 19,2025 18:22 UTC+0800
check ops status done
ops_status:greptime-grhrmh-restart-hxxf7 ns-hagst Restart greptime-grhrmh frontend,etcd Succeed 5/5 Jun 19,2025 18:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-restart-hxxf7 --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-restart-hxxf7 patched
`kbcli cluster delete-ops --name greptime-grhrmh-restart-hxxf7 --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-hxxf7 deleted

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart greptime-grhrmh --auto-approve --force=true --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-hftdv created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-restart-hftdv -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-hftdv   ns-hagst   Restart   greptime-grhrmh   frontend,datanode,meta,etcd   Running   0/8   Jun 19,2025 18:23 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x2)
cluster_status:Abnormal (x8)
cluster_status:Updating (x2)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:1Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:23 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-restart-hftdv   ns-hagst   Restart   greptime-grhrmh   frontend,datanode,meta,etcd   Succeed   8/8   Jun 19,2025 18:23 UTC+0800
check ops status done
ops_status:greptime-grhrmh-restart-hftdv ns-hagst Restart greptime-grhrmh frontend,datanode,meta,etcd Succeed 8/8 Jun 19,2025 18:23 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-restart-hftdv --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-restart-hftdv patched
`kbcli cluster delete-ops --name greptime-grhrmh-restart-hftdv --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-restart-hftdv deleted

check component etcd exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=etcd --namespace ns-hagst | (grep "etcd" || true )`
`kubectl get pvc -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=etcd,apps.kubeblocks.io/vct-name=etcd-storage --namespace ns-hagst`

cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in greptime-grhrmh namespace.
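Volume expansion only succeeds when the backing StorageClass permits it; the setup step detected `gp3` as the default class, so a pre-check looks like this:

```bash
# VolumeExpansion OpsRequests need allowVolumeExpansion=true on the StorageClass.
kubectl get storageclass gp3 -o jsonpath='{.allowVolumeExpansion}{"\n"}'
# -> true, otherwise the resize below would hang or fail
```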
`kbcli cluster volume-expand greptime-grhrmh --auto-approve --force=true --components etcd --volume-claim-templates etcd-storage --storage 3Gi --namespace ns-hagst`
OpsRequest greptime-grhrmh-volumeexpansion-v7lzr created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-volumeexpansion-v7lzr -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-volumeexpansion-v7lzr   ns-hagst   VolumeExpansion   greptime-grhrmh   etcd   Running   0/3   Jun 19,2025 18:25 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x7)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:1Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m      ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:23 UTC+0800
check pod status done

check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
No resources found in greptime-grhrmh namespace.
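The instance listing above already shows `etcd-storage:3Gi`; the same can be verified on the PVCs themselves, reusing the label selector from the pre-expansion check:

```bash
kubectl get pvc -n ns-hagst \
  -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=etcd,apps.kubeblocks.io/vct-name=etcd-storage \
  -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
# Each etcd-storage PVC should report 3Gi once the resize completes.
```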
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-volumeexpansion-v7lzr   ns-hagst   VolumeExpansion   greptime-grhrmh   etcd   Succeed   3/3   Jun 19,2025 18:25 UTC+0800
check ops status done
ops_status:greptime-grhrmh-volumeexpansion-v7lzr ns-hagst VolumeExpansion greptime-grhrmh etcd Succeed 3/3 Jun 19,2025 18:25 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-volumeexpansion-v7lzr --namespace ns-hagst`
opsrequest.apps.kubeblocks.io/greptime-grhrmh-volumeexpansion-v7lzr patched
`kbcli cluster delete-ops --name greptime-grhrmh-volumeexpansion-v7lzr --force --auto-approve --namespace ns-hagst`
OpsRequest greptime-grhrmh-volumeexpansion-v7lzr deleted

check component datanode exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode --namespace ns-hagst | (grep "datanode" || true )`
`kubectl get pvc -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode,apps.kubeblocks.io/vct-name=datanode --namespace ns-hagst`

cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in greptime-grhrmh namespace.
`kbcli cluster volume-expand greptime-grhrmh --auto-approve --force=true --components datanode --volume-claim-templates datanode --storage 6Gi --namespace ns-hagst`
OpsRequest greptime-grhrmh-volumeexpansion-qn5mq created successfully, you can view the progress:
	kbcli cluster describe-ops greptime-grhrmh-volumeexpansion-qn5mq -n ns-hagst
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst`
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-volumeexpansion-qn5mq   ns-hagst   VolumeExpansion   greptime-grhrmh   datanode   Running   0/2   Jun 19,2025 18:26 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating (x2)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:6Gi   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:6Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-15-105.us-west-2.compute.internal/172.31.15.105   Jun 19,2025 18:24 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:23 UTC+0800
greptime-grhrmh-etcd-2 ns-hagst greptime-grhrmh etcd Running leader us-west-2a 200m / 200m 644245094400m / 644245094400m etcd-storage:3Gi ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249 Jun 19,2025 18:23 UTC+0800 greptime-grhrmh-frontend-0 ns-hagst greptime-grhrmh frontend Running us-west-2a 200m / 200m 644245094400m / 644245094400m ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65 Jun 19,2025 18:24 UTC+0800 greptime-grhrmh-frontend-1 ns-hagst greptime-grhrmh frontend Running us-west-2a 200m / 200m 644245094400m / 644245094400m ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137 Jun 19,2025 18:23 UTC+0800 greptime-grhrmh-meta-0 ns-hagst greptime-grhrmh meta Running us-west-2a 200m / 200m 644245094400m / 644245094400m ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65 Jun 19,2025 18:23 UTC+0800 check pod status done check cluster connect `echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash` check cluster connect done No resources found in greptime-grhrmh namespace. check ops status `kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME greptime-grhrmh-volumeexpansion-qn5mq ns-hagst VolumeExpansion greptime-grhrmh datanode Succeed 2/2 Jun 19,2025 18:26 UTC+0800 check ops status done ops_status:greptime-grhrmh-volumeexpansion-qn5mq ns-hagst VolumeExpansion greptime-grhrmh datanode Succeed 2/2 Jun 19,2025 18:26 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests greptime-grhrmh-volumeexpansion-qn5mq --namespace ns-hagst ` opsrequest.apps.kubeblocks.io/greptime-grhrmh-volumeexpansion-qn5mq patched `kbcli cluster delete-ops --name greptime-grhrmh-volumeexpansion-qn5mq --force --auto-approve --namespace ns-hagst ` OpsRequest greptime-grhrmh-volumeexpansion-qn5mq deleted cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop greptime-grhrmh --auto-approve --force=true --namespace ns-hagst ` OpsRequest greptime-grhrmh-stop-7zvxn created successfully, you can view the progress: kbcli cluster describe-ops greptime-grhrmh-stop-7zvxn -n ns-hagst check ops status `kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME greptime-grhrmh-stop-7zvxn ns-hagst Stop greptime-grhrmh datanode,etcd,frontend,meta Running 0/8 Jun 19,2025 18:26 UTC+0800 check cluster status `kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst ` NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME LABELS greptime-grhrmh ns-hagst greptimedb greptimedb-0.3.2 Delete Stopping Jun 19,2025 18:14 UTC+0800 app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2 cluster_status:Stopping cluster_status:Stopping check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME greptime-grhrmh-stop-7zvxn ns-hagst Stop greptime-grhrmh 
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-stop-7zvxn   ns-hagst   Stop   greptime-grhrmh   datanode,etcd,frontend,meta   Running   0/8   Jun 19,2025 18:26 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Stopping   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Stopping
cluster_status:Stopping
check cluster status done
cluster_status:Stopped
check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-stop-7zvxn   ns-hagst   Stop   greptime-grhrmh   datanode,etcd,frontend,meta   Succeed   8/8   Jun 19,2025 18:26 UTC+0800
check ops status done
ops_status:greptime-grhrmh-stop-7zvxn ns-hagst Stop greptime-grhrmh datanode,etcd,frontend,meta Succeed 8/8 Jun 19,2025 18:26 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-stop-7zvxn --namespace ns-hagst `
opsrequest.apps.kubeblocks.io/greptime-grhrmh-stop-7zvxn patched
`kbcli cluster delete-ops --name greptime-grhrmh-stop-7zvxn --force --auto-approve --namespace ns-hagst `
OpsRequest greptime-grhrmh-stop-7zvxn deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start greptime-grhrmh --force=true --namespace ns-hagst `
OpsRequest greptime-grhrmh-start-z4246 created successfully, you can view the progress:
        kbcli cluster describe-ops greptime-grhrmh-start-z4246 -n ns-hagst
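Start mirrors Stop. Note in the checks that follow that the cluster briefly reports Abnormal while pods are recreated before settling at Running; the status loop tolerates that window. An equivalent manifest, as a sketch (name arbitrary):

apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: greptime-grhrmh-start-manual   # arbitrary name, for illustration only
  namespace: ns-hagst
spec:
  clusterRef: greptime-grhrmh
  type: Start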
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-start-z4246   ns-hagst   Start   greptime-grhrmh               Jun 19,2025 18:27 UTC+0800
ops_status:greptime-grhrmh-start-z4246 ns-hagst Start greptime-grhrmh Pending -/- Jun 19,2025 18:27 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Abnormal   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Abnormal
cluster_status:Abnormal
cluster_status:Abnormal
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:6Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   100m / 100m   512Mi / 512Mi   datanode:6Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:27 UTC+0800
check pod status done
check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-start-z4246   ns-hagst   Start   greptime-grhrmh   datanode,etcd,frontend,meta   Succeed   8/8   Jun 19,2025 18:27 UTC+0800
check ops status done
ops_status:greptime-grhrmh-start-z4246 ns-hagst Start greptime-grhrmh datanode,etcd,frontend,meta Succeed 8/8 Jun 19,2025 18:27 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-start-z4246 --namespace ns-hagst `
opsrequest.apps.kubeblocks.io/greptime-grhrmh-start-z4246 patched
`kbcli cluster delete-ops --name greptime-grhrmh-start-z4246 --force --auto-approve --namespace ns-hagst `
OpsRequest greptime-grhrmh-start-z4246 deleted
check component datanode exists
`kubectl get components -l app.kubernetes.io/instance=greptime-grhrmh,apps.kubeblocks.io/component-name=datanode --namespace ns-hagst | (grep "datanode" || true )`
cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale greptime-grhrmh --auto-approve --force=true --components datanode --cpu 200m --memory 0.6Gi --namespace ns-hagst `
OpsRequest greptime-grhrmh-verticalscaling-vrthd created successfully, you can view the progress:
        kbcli cluster describe-ops greptime-grhrmh-verticalscaling-vrthd -n ns-hagst
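The vscale flags map onto a VerticalScaling OpsRequest whose entries embed standard Kubernetes resource requirements. This also explains the odd memory figures in the instance tables: 0.6Gi is 644245094.4 bytes, which the API server canonicalizes to the milli-unit quantity 644245094400m. A sketch of the equivalent manifest (name arbitrary):

apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: greptime-grhrmh-vscale-datanode   # arbitrary name, for illustration only
  namespace: ns-hagst
spec:
  clusterRef: greptime-grhrmh
  type: VerticalScaling
  verticalScaling:
  - componentName: datanode
    requests:
      cpu: 200m
      memory: 0.6Gi   # stored as 644245094400m, as the list-instances output shows
    limits:
      cpu: 200m
      memory: 0.6Gi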
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-vrthd   ns-hagst   VerticalScaling   greptime-grhrmh   datanode   Running   0/2   Jun 19,2025 18:27 UTC+0800
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   Delete   Updating   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   datanode:6Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:28 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   datanode:6Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:28 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:27 UTC+0800
check pod status done
check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops greptime-grhrmh --status all --namespace ns-hagst `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
greptime-grhrmh-verticalscaling-vrthd   ns-hagst   VerticalScaling   greptime-grhrmh   datanode   Succeed   2/2   Jun 19,2025 18:27 UTC+0800
check ops status done
ops_status:greptime-grhrmh-verticalscaling-vrthd ns-hagst VerticalScaling greptime-grhrmh datanode Succeed 2/2 Jun 19,2025 18:27 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests greptime-grhrmh-verticalscaling-vrthd --namespace ns-hagst `
opsrequest.apps.kubeblocks.io/greptime-grhrmh-verticalscaling-vrthd patched
`kbcli cluster delete-ops --name greptime-grhrmh-verticalscaling-vrthd --force --auto-approve --namespace ns-hagst `
OpsRequest greptime-grhrmh-verticalscaling-vrthd deleted
cluster update terminationPolicy WipeOut
`kbcli cluster update greptime-grhrmh --termination-policy=WipeOut --namespace ns-hagst `
cluster.apps.kubeblocks.io/greptime-grhrmh updated
check cluster status
`kbcli cluster list greptime-grhrmh --show-labels --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
greptime-grhrmh   ns-hagst   greptimedb   greptimedb-0.3.2   WipeOut   Running   Jun 19,2025 18:14 UTC+0800   app.kubernetes.io/instance=greptime-grhrmh,clusterdefinition.kubeblocks.io/name=greptimedb,clusterversion.kubeblocks.io/name=greptimedb-0.3.2
check cluster status done
cluster_status:Running
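Unlike the operations above, this update edits the Cluster object in place rather than creating an OpsRequest, which is why no list-ops/delete-ops round trip follows. The change amounts to one field, sketched here as a Cluster spec fragment; with WipeOut, deletion also removes PVCs and backups, which is what lets the final cleanup checks expect an empty namespace:

apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: greptime-grhrmh
  namespace: ns-hagst
spec:
  terminationPolicy: WipeOut   # was Delete; policies run DoNotTerminate, Halt, Delete, WipeOut in increasing destructiveness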
check pod status
`kbcli cluster list-instances greptime-grhrmh --namespace ns-hagst `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
greptime-grhrmh-datanode-0   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   datanode:6Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:28 UTC+0800
greptime-grhrmh-datanode-1   ns-hagst   greptime-grhrmh   datanode   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   datanode:6Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:28 UTC+0800
greptime-grhrmh-etcd-0   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-3-249.us-west-2.compute.internal/172.31.3.249   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-1   ns-hagst   greptime-grhrmh   etcd   Running   leader   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-8-137.us-west-2.compute.internal/172.31.8.137   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-etcd-2   ns-hagst   greptime-grhrmh   etcd   Running   follower   us-west-2a   200m / 200m   644245094400m / 644245094400m   etcd-storage:3Gi   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-0   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-12-68.us-west-2.compute.internal/172.31.12.68   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-frontend-1   ns-hagst   greptime-grhrmh   frontend   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-15-65.us-west-2.compute.internal/172.31.15.65   Jun 19,2025 18:27 UTC+0800
greptime-grhrmh-meta-0   ns-hagst   greptime-grhrmh   meta   Running   us-west-2a   200m / 200m   644245094400m / 644245094400m   ip-172-31-2-13.us-west-2.compute.internal/172.31.2.13   Jun 19,2025 18:27 UTC+0800
check pod status done
check cluster connect
`echo 'greptime cli attach --grpc-addr greptime-grhrmh-frontend.ns-hagst.svc.cluster.local:4001' | kubectl exec -it greptime-grhrmh-frontend-0 --namespace ns-hagst -- bash`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs greptime-grhrmh --namespace ns-hagst `
No log files found. You can enable the log feature with the kbcli command below.
        kbcli cluster update greptime-grhrmh --enable-all-logs=true --namespace ns-hagst
Error from server (NotFound): pods "greptime-grhrmh-frontend-0" not found
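list-logs comes up empty because no log types are enabled on the components, and the trailing NotFound is likely a stale pod reference left over from the restarts rather than a real failure. The suggested --enable-all-logs flag corresponds to setting enabledLogs on each component spec; a sketch of the fragment, assuming a log type named running (hypothetical here; the valid names are declared in the greptimedb ClusterDefinition):

spec:
  componentSpecs:
  - name: frontend
    componentDefRef: frontend
    enabledLogs:
    - running   # hypothetical log type name; check the ClusterDefinition for the real ones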
version","remote-member-id":"2a6cee8d3299031c","error":"Get \"http://greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version\": dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:26.312Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"http://greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version","remote-member-id":"e5c022b41c3f5a09","error":"Get \"http://greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version\": dial tcp: lookup greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:26.312Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"e5c022b41c3f5a09","error":"Get \"http://greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version\": dial tcp: lookup greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:27.716Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2a6cee8d3299031c","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:27.716Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2a6cee8d3299031c","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:27.717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e5c022b41c3f5a09","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:27.717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e5c022b41c3f5a09","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-2.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"info","ts":"2025-06-19T10:27:27.810Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e5c022b41c3f5a09"*** ***"level":"info","ts":"2025-06-19T10:27:27.811Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d15463dfff6cb51f","remote-peer-id":"e5c022b41c3f5a09"*** ***"level":"info","ts":"2025-06-19T10:27:27.811Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d15463dfff6cb51f","remote-peer-id":"e5c022b41c3f5a09"*** ***"level":"warn","ts":"2025-06-19T10:27:30.316Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer 
URL","address":"http://greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version","remote-member-id":"2a6cee8d3299031c","error":"Get \"http://greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version\": dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:30.316Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"2a6cee8d3299031c","error":"Get \"http://greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local:2380/version\": dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"info","ts":"2025-06-19T10:27:31.556Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d15463dfff6cb51f","to":"2a6cee8d3299031c","stream-type":"stream MsgApp v2"*** ***"level":"info","ts":"2025-06-19T10:27:31.556Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2a6cee8d3299031c"*** ***"level":"info","ts":"2025-06-19T10:27:31.556Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d15463dfff6cb51f","remote-peer-id":"2a6cee8d3299031c"*** ***"level":"info","ts":"2025-06-19T10:27:31.556Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d15463dfff6cb51f","to":"2a6cee8d3299031c","stream-type":"stream Message"*** ***"level":"info","ts":"2025-06-19T10:27:31.556Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d15463dfff6cb51f","remote-peer-id":"2a6cee8d3299031c"*** ***"level":"warn","ts":"2025-06-19T10:27:31.608Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2a6cee8d3299031c","error":"failed to dial 2a6cee8d3299031c on stream Message (dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host)"*** ***"level":"warn","ts":"2025-06-19T10:27:32.636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-06-19T10:27:26.407Z","time spent":"6.229598587s","remote":"172.31.7.0:35420","response type":"/v3electionpb.Election/Campaign","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""*** ***"level":"warn","ts":"2025-06-19T10:27:32.717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2a6cee8d3299031c","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"warn","ts":"2025-06-19T10:27:32.717Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2a6cee8d3299031c","rtt":"0s","error":"dial tcp: lookup greptime-grhrmh-etcd-0.greptime-grhrmh-etcd-headless.ns-hagst.svc.cluster.local on 10.100.0.10:53: no such host"*** ***"level":"info","ts":"2025-06-19T10:27:32.805Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2a6cee8d3299031c"*** 
***"level":"info","ts":"2025-06-19T10:27:32.805Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d15463dfff6cb51f","remote-peer-id":"2a6cee8d3299031c"*** ***"level":"info","ts":"2025-06-19T10:27:32.805Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d15463dfff6cb51f","remote-peer-id":"2a6cee8d3299031c"*** delete cluster greptime-grhrmh `kbcli cluster delete greptime-grhrmh --auto-approve --namespace ns-hagst ` Cluster greptime-grhrmh deleted pod_info:greptime-grhrmh-datanode-0 1/1 Running 0 42s greptime-grhrmh-datanode-1 1/1 Running 0 66s greptime-grhrmh-etcd-0 2/2 Running 0 118s greptime-grhrmh-etcd-1 2/2 Running 0 118s greptime-grhrmh-etcd-2 2/2 Running 0 118s greptime-grhrmh-frontend-0 1/1 Running 2 (109s ago) 118s greptime-grhrmh-frontend-1 1/1 Running 0 95s greptime-grhrmh-meta-0 1/1 Running 0 118s No resources found in ns-hagst namespace. delete cluster pod done No resources found in ns-hagst namespace. check cluster resource non-exist OK: pvc No resources found in ns-hagst namespace. delete cluster done No resources found in ns-hagst namespace. No resources found in ns-hagst namespace. No resources found in ns-hagst namespace. Greptimedb Test Suite All Done! --------------------------------------Greptimedb (Topology = Replicas 2) Test Result-------------------------------------- [PASSED]|[Create]|[ClusterDefinition=greptimedb;ClusterVersion=greptimedb-0.3.2;]|[Description=Create a cluster with the specified cluster definition greptimedb and cluster version greptimedb-0.3.2] [PASSED]|[Connect]|[ComponentName=frontend]|[Description=Connect to the cluster] [PASSED]|[Restart]|[ComponentName=datanode]|[Description=Restart the cluster specify component datanode] [PASSED]|[HorizontalScaling Out]|[ComponentName=frontend,datanode]|[Description=HorizontalScaling Out the cluster specify component frontend,datanode] [PASSED]|[HorizontalScaling In]|[ComponentName=frontend,datanode]|[Description=HorizontalScaling In the cluster specify component frontend,datanode] [PASSED]|[Restart]|[ComponentName=meta]|[Description=Restart the cluster specify component meta] [PASSED]|[VerticalScaling]|[ComponentName=frontend,etcd]|[Description=VerticalScaling the cluster specify component frontend,etcd] [PASSED]|[VerticalScaling]|[ComponentName=meta]|[Description=VerticalScaling the cluster specify component meta] [PASSED]|[Restart]|[ComponentName=frontend,etcd]|[Description=Restart the cluster specify component frontend,etcd] [PASSED]|[Restart]|[-]|[Description=Restart the cluster] [PASSED]|[VolumeExpansion]|[ComponentName=etcd;ComponentVolume=etcd-storage]|[Description=VolumeExpansion the cluster specify component etcd and volume etcd-storage] [PASSED]|[VolumeExpansion]|[ComponentName=datanode;ComponentVolume=datanode]|[Description=VolumeExpansion the cluster specify component datanode and volume datanode] [PASSED]|[Stop]|[-]|[Description=Stop the cluster] [PASSED]|[Start]|[-]|[Description=Start the cluster] [PASSED]|[VerticalScaling]|[ComponentName=datanode]|[Description=VerticalScaling the cluster specify component datanode] [PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut] [PASSED]|[Delete]|[-]|[Description=Delete the cluster] [END]