bash test/kbcli/test_kbcli_0.9.sh --type 39 --version 0.9.5 --generate-output true --chaos-mesh true --drain-node true --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-htndi `
`kubectl create namespace ns-htndi`
namespace/ns-htndi created
create namespace ns-htndi done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "0.9" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v0.9.5-beta.8`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Make sure your docker service is running and begin your journey with kbcli:
	kbcli playground init
For more information on how to get started, please visit:
	https://kubeblocks.io
download kbcli v0.9.5-beta.8 done
Kubernetes: v1.32.6
KubeBlocks: 0.9.5
kbcli: 0.9.5-beta.8
WARNING: version difference between kbcli (0.9.5-beta.8) and kubeblocks (0.9.5)
Kubernetes Env: v1.32.6
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
kubeblocks version is:0.9.5
skip upgrade kubeblocks
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check cluster definition
set component name:jobmanager
set component version
No resources found
no component version found
unsupported component definition
REPORT_COUNT 0:0
not found component version
set replicas first:1
set replicas third:1
set minimum cmpv service version
set minimum cmpv service version replicas:1
REPORT_COUNT:1
CLUSTER_TOPOLOGY: Not found topology in cluster definition flink
LIMIT_CPU:0.1
LIMIT_MEMORY:1
storage size: 1
No resources found in ns-htndi namespace.
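Note: before the cluster is created, the toolchain state logged above can be re-verified by hand; a minimal sketch using only commands that already appear in this run (output will differ per environment):
`kbcli version`                      # client vs. server versions; expect the beta/stable mismatch warned about above
`helm repo list | grep chaos-mesh`   # the chart repo added above
`kubectl get storageclass`           # should list the detected default storage class "default"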
termination_policy:Halt
create 1 replica Halt flink cluster
check cluster version
check cluster definition
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: flink-rcogmr
  namespace: ns-htndi
spec:
  clusterDefinitionRef: flink
  clusterVersionRef: flink-1.16
  terminationPolicy: Halt
  componentSpecs:
    - name: jobmanager
      componentDefRef: jobmanager
      replicas: 1
      resources:
        requests:
          cpu: 100m
          memory: 1Gi
        limits:
          cpu: 100m
          memory: 1Gi
    - name: taskmanager
      componentDefRef: taskmanager
      replicas: 1
      resources:
        requests:
          cpu: 100m
          memory: 1Gi
        limits:
          cpu: 100m
          memory: 1Gi
`kubectl apply -f test_create_flink-rcogmr.yaml`
cluster.apps.kubeblocks.io/flink-rcogmr created
apply test_create_flink-rcogmr.yaml Success
`rm -rf test_create_flink-rcogmr.yaml`
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                          Sep 01,2025 11:18 UTC+0800   clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
cluster_status:
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    100m / 100m          1Gi / 1Gi                         aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                         aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking...
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=flink-rcogmr`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.username}"`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.password}"`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.port}"`
DB_USERNAME:;DB_PASSWORD:;DB_PORT:8081;DB_DATABASE:
There is no password in Type: 39.
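Note: the jsonpath reads above return raw Secret fields, which Kubernetes stores base64-encoded; the script evidently decodes them before printing DB_PORT:8081. A minimal sketch of the same read done by hand (assuming the conn-credential Secret still exists in ns-htndi):
`kubectl get secret flink-rcogmr-conn-credential -n ns-htndi -o jsonpath='{.data.port}' | base64 -d`   # prints 8081
Username and password decode to empty strings here, which matches "There is no password in Type: 39" above.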
describe cluster
`kbcli cluster describe flink-rcogmr --namespace ns-htndi `
Name: flink-rcogmr	 Created Time: Sep 01,2025 11:18 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION      STATUS    TERMINATION-POLICY
ns-htndi    flink                flink-1.16   Running   Halt

Endpoints:
COMPONENT    MODE        INTERNAL                                                  EXTERNAL
jobmanager   ReadWrite   flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:6123
                         flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081
                         flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:6124

Topology:
COMPONENT     INSTANCE                     ROLE   STATUS    AZ   NODE                                             CREATED-TIME
jobmanager    flink-rcogmr-jobmanager-0           Running   0    aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
taskmanager   flink-rcogmr-taskmanager-0          Running   0    aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800

Resources Allocation:
COMPONENT     DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
jobmanager    false       100m / 100m          1Gi / 1Gi
taskmanager   false       100m / 100m          1Gi / 1Gi

Images:
COMPONENT     TYPE          IMAGE
jobmanager    jobmanager    docker.io/apecloud/flink:1.16
taskmanager   taskmanager   docker.io/apecloud/flink:1.16

Show cluster events: kbcli cluster list-events -n ns-htndi flink-rcogmr
`kbcli cluster label flink-rcogmr app.kubernetes.io/instance- --namespace ns-htndi `
label "app.kubernetes.io/instance" not found.
`kbcli cluster label flink-rcogmr app.kubernetes.io/instance=flink-rcogmr --namespace ns-htndi `
`kbcli cluster label flink-rcogmr --list --namespace ns-htndi `
NAME           NAMESPACE   LABELS
flink-rcogmr   ns-htndi    app.kubernetes.io/instance=flink-rcogmr clusterdefinition.kubeblocks.io/name=flink clusterversion.kubeblocks.io/name=flink-1.16
label cluster app.kubernetes.io/instance=flink-rcogmr Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=flink-rcogmr --namespace ns-htndi `
`kbcli cluster label flink-rcogmr --list --namespace ns-htndi `
NAME           NAMESPACE   LABELS
flink-rcogmr   ns-htndi    app.kubernetes.io/instance=flink-rcogmr case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=flink clusterversion.kubeblocks.io/name=flink-1.16
label cluster case.name=kbcli.test1 Success
`kbcli cluster label flink-rcogmr case.name=kbcli.test2 --overwrite --namespace ns-htndi `
`kbcli cluster label flink-rcogmr --list --namespace ns-htndi `
NAME           NAMESPACE   LABELS
flink-rcogmr   ns-htndi    app.kubernetes.io/instance=flink-rcogmr case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=flink clusterversion.kubeblocks.io/name=flink-1.16
label cluster case.name=kbcli.test2 Success
`kbcli cluster label flink-rcogmr case.name- --namespace ns-htndi `
`kbcli cluster label flink-rcogmr --list --namespace ns-htndi `
NAME           NAMESPACE   LABELS
flink-rcogmr   ns-htndi    app.kubernetes.io/instance=flink-rcogmr clusterdefinition.kubeblocks.io/name=flink clusterversion.kubeblocks.io/name=flink-1.16
delete cluster label case.name Success
cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash `
Unable to use a TTY - input is not a terminal or the right kind of file
Apache Flink Web Dashboard
connect cluster Success
cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale flink-rcogmr --auto-approve --force=true --components jobmanager --cpu 200m --memory 1.1Gi --namespace ns-htndi `
OpsRequest flink-rcogmr-verticalscaling-pkk6j created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-verticalscaling-pkk6j -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT    STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-verticalscaling-pkk6j   ns-htndi    VerticalScaling   flink-rcogmr   jobmanager   Creating   -/-        Sep 01,2025 11:20 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:20 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking... connect checking...
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT    STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-verticalscaling-pkk6j   ns-htndi    VerticalScaling   flink-rcogmr   jobmanager   Succeed   1/1        Sep 01,2025 11:20 UTC+0800
check ops status done
ops_status:flink-rcogmr-verticalscaling-pkk6j ns-htndi VerticalScaling flink-rcogmr jobmanager Succeed 1/1 Sep 01,2025 11:20 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-verticalscaling-pkk6j --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-verticalscaling-pkk6j patched
`kbcli cluster delete-ops --name flink-rcogmr-verticalscaling-pkk6j --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-verticalscaling-pkk6j deleted
check component taskmanager exists
`kubectl get components -l app.kubernetes.io/instance=flink-rcogmr,apps.kubeblocks.io/component-name=taskmanager --namespace ns-htndi | (grep "taskmanager" || true )`
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in flink-rcogmr namespace.
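Note on the MEMORY column above: `1181116006400m` is the 1.1Gi requested by the vscale, rendered in Kubernetes milli-units (1.1 x 1024^3 = 1181116006.4 bytes, i.e. 1181116006400m). A hedged way to read the same values straight from the pod spec with plain kubectl:
`kubectl get pod flink-rcogmr-jobmanager-0 -n ns-htndi -o jsonpath='{.spec.containers[0].resources}'`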
`kbcli cluster hscale flink-rcogmr --auto-approve --force=true --components taskmanager --replicas 2 --namespace ns-htndi `
OpsRequest flink-rcogmr-horizontalscaling-tgzrt created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-horizontalscaling-tgzrt -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                   NAMESPACE   TYPE                CLUSTER        COMPONENT     STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-horizontalscaling-tgzrt   ns-htndi    HorizontalScaling   flink-rcogmr   taskmanager   Creating   -/-        Sep 01,2025 11:21 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:20 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
flink-rcogmr-taskmanager-1   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:21 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
check cluster connect done
No resources found in flink-rcogmr namespace.
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                   NAMESPACE   TYPE                CLUSTER        COMPONENT     STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-horizontalscaling-tgzrt   ns-htndi    HorizontalScaling   flink-rcogmr   taskmanager   Succeed   1/1        Sep 01,2025 11:21 UTC+0800
check ops status done
ops_status:flink-rcogmr-horizontalscaling-tgzrt ns-htndi HorizontalScaling flink-rcogmr taskmanager Succeed 1/1 Sep 01,2025 11:21 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-horizontalscaling-tgzrt --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-horizontalscaling-tgzrt patched
`kbcli cluster delete-ops --name flink-rcogmr-horizontalscaling-tgzrt --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-horizontalscaling-tgzrt deleted
check component taskmanager exists
`kubectl get components -l app.kubernetes.io/instance=flink-rcogmr,apps.kubeblocks.io/component-name=taskmanager --namespace ns-htndi | (grep "taskmanager" || true )`
cluster hscale
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in flink-rcogmr namespace.
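Note: after the scale-out above, the JobManager's REST API (the same port 8081 the connect check curls) can confirm that both TaskManagers registered; a sketch using standard Flink REST paths, run from any pod with cluster-network access:
`curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081/overview`       # reports taskmanagers and total slot counts
`curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081/taskmanagers`   # one entry per registered TaskManager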
`kbcli cluster hscale flink-rcogmr --auto-approve --force=true --components taskmanager --replicas 1 --namespace ns-htndi `
OpsRequest flink-rcogmr-horizontalscaling-f78r8 created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-horizontalscaling-f78r8 -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                   NAMESPACE   TYPE                CLUSTER        COMPONENT     STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-horizontalscaling-f78r8   ns-htndi    HorizontalScaling   flink-rcogmr   taskmanager   Pending   -/-        Sep 01,2025 11:21 UTC+0800
ops_status:flink-rcogmr-horizontalscaling-f78r8 ns-htndi HorizontalScaling flink-rcogmr taskmanager Creating -/- Sep 01,2025 11:21 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:20 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:18 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
check cluster connect done
No resources found in flink-rcogmr namespace.
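Note: every kbcli operation in this run is backed by an OpsRequest custom resource (the same objects patched and force-deleted throughout the log); besides the suggested `kbcli cluster describe-ops`, their progress can be watched directly with plain kubectl:
`kubectl get opsrequests.apps.kubeblocks.io -n ns-htndi -w`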
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                   NAMESPACE   TYPE                CLUSTER        COMPONENT     STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-horizontalscaling-f78r8   ns-htndi    HorizontalScaling   flink-rcogmr   taskmanager   Succeed   1/1        Sep 01,2025 11:21 UTC+0800
check ops status done
ops_status:flink-rcogmr-horizontalscaling-f78r8 ns-htndi HorizontalScaling flink-rcogmr taskmanager Succeed 1/1 Sep 01,2025 11:21 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-horizontalscaling-f78r8 --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-horizontalscaling-f78r8 patched
`kbcli cluster delete-ops --name flink-rcogmr-horizontalscaling-f78r8 --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-horizontalscaling-f78r8 deleted
cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster stop flink-rcogmr --auto-approve --force=true --namespace ns-htndi `
OpsRequest flink-rcogmr-stop-pj62l created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-stop-pj62l -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                      NAMESPACE   TYPE   CLUSTER        COMPONENT   STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-stop-pj62l   ns-htndi    Stop   flink-rcogmr               Creating   -/-        Sep 01,2025 11:22 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Stopped   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Stopped
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                      NAMESPACE   TYPE   CLUSTER        COMPONENT                 STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-stop-pj62l   ns-htndi    Stop   flink-rcogmr   jobmanager,taskmanager   Succeed   2/2        Sep 01,2025 11:22 UTC+0800
check ops status done
ops_status:flink-rcogmr-stop-pj62l ns-htndi Stop flink-rcogmr jobmanager,taskmanager Succeed 2/2 Sep 01,2025 11:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-stop-pj62l --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-stop-pj62l patched
`kbcli cluster delete-ops --name flink-rcogmr-stop-pj62l --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-stop-pj62l deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start flink-rcogmr --force=true --namespace ns-htndi `
OpsRequest flink-rcogmr-start-hfgmg created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-start-hfgmg -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                       NAMESPACE   TYPE    CLUSTER        COMPONENT   STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-start-hfgmg   ns-htndi    Start   flink-rcogmr               Creating   -/-        Sep 01,2025 11:22 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:22 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:22 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
connect checking... connect checking... connect checking... connect checking... connect checking... connect checking...
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                       NAMESPACE   TYPE    CLUSTER        COMPONENT                 STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-start-hfgmg   ns-htndi    Start   flink-rcogmr   jobmanager,taskmanager   Succeed   2/2        Sep 01,2025 11:22 UTC+0800
check ops status done
ops_status:flink-rcogmr-start-hfgmg ns-htndi Start flink-rcogmr jobmanager,taskmanager Succeed 2/2 Sep 01,2025 11:22 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-start-hfgmg --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-start-hfgmg patched
`kbcli cluster delete-ops --name flink-rcogmr-start-hfgmg --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-start-hfgmg deleted
`kubectl get secrets -l app.kubernetes.io/instance=flink-rcogmr`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.username}"`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.password}"`
`kubectl get secrets flink-rcogmr-conn-credential -o jsonpath="{.data.port}"`
DB_USERNAME:;DB_PASSWORD:;DB_PORT:8081;DB_DATABASE:
`flink run /opt/flink/examples/batch/EnumTriangles.jar`
Unable to use a TTY - input is not a terminal or the right kind of file
exec return msg:Executing EnumTriangles example with default edges data set.
Use --edges to specify file input.
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID dba70fcb6ea8a9214d369f3e655be433
Program execution finished
Job with JobID dba70fcb6ea8a9214d369f3e655be433 has finished.
Job Runtime: 13906 ms
Accumulator Results:
- 27f4bcbae357f38fa0a6274b8e6ecd2d (java.util.ArrayList) [4 elements]
(1,2,3)
(1,2,5)
(1,3,4)
(3,7,8)
Program execution finished
check component taskmanager exists
`kubectl get components -l app.kubernetes.io/instance=flink-rcogmr,apps.kubeblocks.io/component-name=taskmanager --namespace ns-htndi | (grep "taskmanager" || true )`
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart flink-rcogmr --auto-approve --force=true --components taskmanager --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-jm9gk created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-restart-jm9gk -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT     STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-restart-jm9gk   ns-htndi    Restart   flink-rcogmr   taskmanager   Creating   -/-        Sep 01,2025 11:25 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:22 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    100m / 100m          1Gi / 1Gi                                   aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:25 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT     STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-restart-jm9gk   ns-htndi    Restart   flink-rcogmr   taskmanager   Succeed   1/1        Sep 01,2025 11:25 UTC+0800
check ops status done
ops_status:flink-rcogmr-restart-jm9gk ns-htndi Restart flink-rcogmr taskmanager Succeed 1/1 Sep 01,2025 11:25 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-restart-jm9gk --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-restart-jm9gk patched
`kbcli cluster delete-ops --name flink-rcogmr-restart-jm9gk --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-jm9gk deleted
check component taskmanager exists
`kubectl get components -l app.kubernetes.io/instance=flink-rcogmr,apps.kubeblocks.io/component-name=taskmanager --namespace ns-htndi | (grep "taskmanager" || true )`
cluster vscale
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale flink-rcogmr --auto-approve --force=true --components taskmanager --cpu 200m --memory 1.1Gi --namespace ns-htndi `
OpsRequest flink-rcogmr-verticalscaling-f5b8d created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-verticalscaling-f5b8d -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT     STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-verticalscaling-f5b8d   ns-htndi    VerticalScaling   flink-rcogmr   taskmanager   Creating   -/-        Sep 01,2025 11:25 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:22 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT     STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-verticalscaling-f5b8d   ns-htndi    VerticalScaling   flink-rcogmr   taskmanager   Succeed   1/1        Sep 01,2025 11:25 UTC+0800
check ops status done
ops_status:flink-rcogmr-verticalscaling-f5b8d ns-htndi VerticalScaling flink-rcogmr taskmanager Succeed 1/1 Sep 01,2025 11:25 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-verticalscaling-f5b8d --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-verticalscaling-f5b8d patched
`kbcli cluster delete-ops --name flink-rcogmr-verticalscaling-f5b8d --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-verticalscaling-f5b8d deleted
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart flink-rcogmr --auto-approve --force=true --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-jsd78 created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-restart-jsd78 -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT                 STATUS     PROGRESS   CREATED-TIME
flink-rcogmr-restart-jsd78   ns-htndi    Restart   flink-rcogmr   jobmanager,taskmanager   Creating   -/-        Sep 01,2025 11:26 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Updating   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
cluster_status:Updating cluster_status:Updating
cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
connect checking...
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT                 STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-restart-jsd78   ns-htndi    Restart   flink-rcogmr   jobmanager,taskmanager   Succeed   2/2        Sep 01,2025 11:26 UTC+0800
check ops status done
ops_status:flink-rcogmr-restart-jsd78 ns-htndi Restart flink-rcogmr jobmanager,taskmanager Succeed 2/2 Sep 01,2025 11:26 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-restart-jsd78 --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-restart-jsd78 patched
`kbcli cluster delete-ops --name flink-rcogmr-restart-jsd78 --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-jsd78 deleted
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart flink-rcogmr --auto-approve --force=true --components jobmanager --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-cr7ct created successfully, you can view the progress:
	kbcli cluster describe-ops flink-rcogmr-restart-cr7ct -n ns-htndi
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT    STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-restart-cr7ct   ns-htndi    Restart   flink-rcogmr   jobmanager   Pending   -/-        Sep 01,2025 11:26 UTC+0800
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   Halt                 Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
connect checking... connect checking... connect checking... connect checking... connect checking... connect checking...
check cluster connect done
check ops status
`kbcli cluster list-ops flink-rcogmr --status all --namespace ns-htndi `
NAME                         NAMESPACE   TYPE      CLUSTER        COMPONENT    STATUS    PROGRESS   CREATED-TIME
flink-rcogmr-restart-cr7ct   ns-htndi    Restart   flink-rcogmr   jobmanager   Succeed   1/1        Sep 01,2025 11:26 UTC+0800
check ops status done
ops_status:flink-rcogmr-restart-cr7ct ns-htndi Restart flink-rcogmr jobmanager Succeed 1/1 Sep 01,2025 11:26 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests flink-rcogmr-restart-cr7ct --namespace ns-htndi `
opsrequest.apps.kubeblocks.io/flink-rcogmr-restart-cr7ct patched
`kbcli cluster delete-ops --name flink-rcogmr-restart-cr7ct --force --auto-approve --namespace ns-htndi `
OpsRequest flink-rcogmr-restart-cr7ct deleted
cluster update terminationPolicy WipeOut
`kbcli cluster update flink-rcogmr --termination-policy=WipeOut --namespace ns-htndi `
cluster.apps.kubeblocks.io/flink-rcogmr updated
check cluster status
`kbcli cluster list flink-rcogmr --show-labels --namespace ns-htndi `
NAME           NAMESPACE   CLUSTER-DEFINITION   VERSION      TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
flink-rcogmr   ns-htndi    flink                flink-1.16   WipeOut              Running   Sep 01,2025 11:18 UTC+0800   app.kubernetes.io/instance=flink-rcogmr,clusterdefinition.kubeblocks.io/name=flink,clusterversion.kubeblocks.io/name=flink-1.16
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances flink-rcogmr --namespace ns-htndi `
NAME                         NAMESPACE   CLUSTER        COMPONENT     STATUS    ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE   NODE                                             CREATED-TIME
flink-rcogmr-jobmanager-0    ns-htndi    flink-rcogmr   jobmanager    Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
flink-rcogmr-taskmanager-0   ns-htndi    flink-rcogmr   taskmanager   Running                       0    200m / 200m          1181116006400m / 1181116006400m             aks-cicdamdpool-18448605-vmss000001/10.224.0.5   Sep 01,2025 11:26 UTC+0800
check pod status done
check cluster connect
`echo "curl -s flink-rcogmr-jobmanager.ns-htndi.svc.cluster.local:8081" | kubectl exec -it flink-rcogmr-jobmanager-0 --namespace ns-htndi -- bash`
check cluster connect done
cluster list-logs
`kbcli cluster list-logs flink-rcogmr --namespace ns-htndi `
No log files found. You can enable the log feature with the kbcli command below.
kbcli cluster update flink-rcogmr --enable-all-logs=true --namespace ns-htndi
Error from server (NotFound): pods "flink-rcogmr-jobmanager-0" not found
cluster logs
`kbcli cluster logs flink-rcogmr --tail 30 --namespace ns-htndi `
2025-09-01 03:27:26,267 INFO  akka.remote.RemoteActorRefProvider [] - Akka Cluster not in use - enabling unsafe features anyway because `akka.remote.use-unsafe-remote-features-outside-cluster` has been enabled.
2025-09-01 03:27:26,267 INFO  akka.remote.Remoting [] - Starting remoting
2025-09-01 03:27:26,464 INFO  akka.remote.Remoting [] - Remoting started; listening on addresses :[akka.tcp://flink-metrics@flink-rcogmr-jobmanager:40367]
2025-09-01 03:27:26,568 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Actor system started at akka.tcp://flink-metrics@flink-rcogmr-jobmanager:40367
2025-09-01 03:27:26,769 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at akka://flink-metrics/user/rpc/MetricQueryService .
2025-09-01 03:27:26,969 INFO  org.apache.flink.runtime.dispatcher.FileExecutionGraphInfoStore [] - Initializing FileExecutionGraphInfoStore: Storage directory /tmp/executionGraphStore-0e673ab4-85ea-4414-ae15-98f7ce726bad, expiration time 3600000, maximum cache size 52428800 bytes.
2025-09-01 03:27:27,276 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Upload directory /tmp/flink-web-1b5e1ec2-97a6-4f45-944b-25f1e55e2e83/flink-web-upload does not exist.
2025-09-01 03:27:27,365 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Created directory /tmp/flink-web-1b5e1ec2-97a6-4f45-944b-25f1e55e2e83/flink-web-upload for file uploads.
2025-09-01 03:27:27,367 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Starting rest endpoint.
2025-09-01 03:27:29,372 INFO  org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component log file: /opt/flink/log/flink--standalonesession-0-flink-rcogmr-jobmanager-0.log
2025-09-01 03:27:29,373 INFO  org.apache.flink.runtime.webmonitor.WebMonitorUtils [] - Determined location of main cluster component stdout file: /opt/flink/log/flink--standalonesession-0-flink-rcogmr-jobmanager-0.out
2025-09-01 03:27:30,668 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Rest endpoint listening at 0.0.0.0:8081
2025-09-01 03:27:30,765 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - http://0.0.0.0:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
2025-09-01 03:27:30,766 INFO  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Web frontend listening at http://0.0.0.0:8081.
2025-09-01 03:27:30,967 INFO  org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunner [] - DefaultDispatcherRunner was granted leadership with leader id 00000000-0000-0000-0000-000000000000. Creating new DispatcherLeaderProcess.
2025-09-01 03:27:30,972 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Start SessionDispatcherLeaderProcess.
2025-09-01 03:27:30,973 INFO  org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Starting resource manager service.
2025-09-01 03:27:31,064 INFO  org.apache.flink.runtime.resourcemanager.ResourceManagerServiceImpl [] - Resource manager service is granted leadership with session id 00000000-0000-0000-0000-000000000000.
2025-09-01 03:27:31,068 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Recover all persisted job graphs that are not finished, yet.
2025-09-01 03:27:31,068 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Successfully recovered 0 persisted job graphs.
2025-09-01 03:27:31,365 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/rpc/resourcemanager_0 .
2025-09-01 03:27:31,465 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.StandaloneDispatcher at akka://flink/user/rpc/dispatcher_1 .
2025-09-01 03:27:31,668 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Starting the resource manager.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jboss.netty.util.internal.ByteBufferUtil (file:/tmp/flink-rpc-akka_3c787927-b2a4-4c09-948a-76acab645326.jar) to method java.nio.DirectByteBuffer.cleaner()
WARNING: Please consider reporting this to the maintainers of org.jboss.netty.util.internal.ByteBufferUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2025-09-01 03:27:36,764 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Registering TaskManager with ResourceID 10.244.1.131:34901-c388e7 (akka.tcp://flink@10.244.1.131:34901/user/rpc/taskmanager_0) at ResourceManager
2025-09-01 03:27:36,868 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Registering TaskManager with ResourceID 10.244.1.131:34901-c388e7 (akka.tcp://flink@10.244.1.131:34901/user/rpc/taskmanager_0) at ResourceManager
delete cluster flink-rcogmr
`kbcli cluster delete flink-rcogmr --auto-approve --namespace ns-htndi `
Cluster flink-rcogmr deleted
pod_info:flink-rcogmr-jobmanager-0 1/1 Running 0 48s
flink-rcogmr-taskmanager-0 1/1 Running 0 64s
No resources found in ns-htndi namespace.
delete cluster pod done
No resources found in ns-htndi namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-htndi namespace.
delete cluster done
No resources found in ns-htndi namespace.
No resources found in ns-htndi namespace.
No resources found in ns-htndi namespace.
Flink Test Suite All Done!
--------------------------------------Flink (Topology = Replicas 1) Test Result--------------------------------------
[PASSED]|[Create]|[ClusterDefinition=flink;ClusterVersion=flink-1.16;]|[Description=Create a cluster with the specified cluster definition flink and cluster version flink-1.16]
[PASSED]|[Connect]|[ComponentName=jobmanager]|[Description=Connect to the cluster]
[PASSED]|[VerticalScaling]|[ComponentName=jobmanager]|[Description=VerticalScaling the cluster specify component jobmanager]
[PASSED]|[HorizontalScaling Out]|[ComponentName=taskmanager]|[Description=HorizontalScaling Out the cluster specify component taskmanager]
[PASSED]|[HorizontalScaling In]|[ComponentName=taskmanager]|[Description=HorizontalScaling In the cluster specify component taskmanager]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[PASSED]|[Restart]|[ComponentName=taskmanager]|[Description=Restart the cluster specify component taskmanager]
[PASSED]|[VerticalScaling]|[ComponentName=taskmanager]|[Description=VerticalScaling the cluster specify component taskmanager]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Restart]|[ComponentName=jobmanager]|[Description=Restart the cluster specify component jobmanager]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]