source commons files
source engines files
source kubeblocks files

`kubectl get namespace | grep ns-jvbuz`
`kubectl create namespace ns-jvbuz`
namespace/ns-jvbuz created
create namespace ns-jvbuz done

download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.0`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 33.6M  100 33.6M    0     0   118M       0 --:--:-- --:--:-- --:--:--  118M
kbcli installed successfully.

Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 1.0.0
kbcli: 1.0.0

Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
https://kubeblocks.io

download kbcli v1.0.0 done
Kubernetes: v1.32.5-eks-5d4a308
KubeBlocks: 1.0.0
kbcli: 1.0.0
Kubernetes Env: v1.32.5-eks-5d4a308

check snapshot controller
check snapshot controller done
eks default-vsc found
POD_RESOURCES: No resources found
found default storage class: gp3
KubeBlocks version is:1.0.0
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.0

Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed

check component definition
set component name:mogdb
set component version
set component version:mogdb
set service versions:5.0.5
set service versions sorted:5.0.5
set mogdb component definition
set mogdb component definition mogdb-1.0.0-alpha.0
set replicas first:2,5.0.5
set replicas third:2,5.0.5
set replicas fourth:2,5.0.5
set minimum cmpv service version
set minimum cmpv service version replicas:2,5.0.5
REPORT_COUNT:1
Not support cluster topology for mogdb
set mogdb component definition
set mogdb component definition mogdb-1.0.0-alpha.0
LIMIT_CPU:0.5
LIMIT_MEMORY:1
storage size: 5
No resources found in ns-jvbuz namespace.
termination_policy:DoNotTerminate

create 2 replica DoNotTerminate mogdb cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: mogdb-1.0.0-alpha.0 by component version:mogdb

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mogdb-nbpumz
  namespace: ns-jvbuz
spec:
  terminationPolicy: DoNotTerminate
  componentSpecs:
    - name: mogdb
      componentDef: mogdb-1.0.0-alpha.0
      tls: false
      replicas: 2
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 500m
          memory: 1Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi

`kubectl apply -f test_create_mogdb-nbpumz.yaml`
cluster.apps.kubeblocks.io/mogdb-nbpumz created
apply test_create_mogdb-nbpumz.yaml Success
`rm -rf test_create_mogdb-nbpumz.yaml`

check cluster status
`kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz`
NAME           NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
mogdb-nbpumz   ns-jvbuz                         DoNotTerminate                May 28,2025 11:39 UTC+0800
cluster_status:
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz`
NAME                   NAMESPACE   CLUSTER        COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                                        CREATED-TIME
mogdb-nbpumz-mogdb-0   ns-jvbuz    mogdb-nbpumz   mogdb       Running   primary                  us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169   May 28,2025 11:39 UTC+0800
mogdb-nbpumz-mogdb-1   ns-jvbuz    mogdb-nbpumz   mogdb       Running   secondary                us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212   May 28,2025 11:40 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check cluster connect
`echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check cluster connect done

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check pod mogdb-nbpumz-mogdb-0 container_name mogdb exist password 41696O66h*#xH7FI
check pod mogdb-nbpumz-mogdb-0 container_name helper exist password 41696O66h*#xH7FI
check pod mogdb-nbpumz-mogdb-0 container_name exporter exist password 41696O66h*#xH7FI
check pod mogdb-nbpumz-mogdb-0 container_name kbagent exist password 41696O66h*#xH7FI
No container logs contain secret password.
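The credential lookups above read the root-account Secret with jsonpath. The `data` fields of a Kubernetes Secret are base64-encoded, so each value has to be decoded before use. A minimal sketch of the decode step, assuming GNU `base64`; the encoded value below is an illustrative sample ("root"), not the cluster's real credential:

```shell
# Secret .data fields are base64-encoded; decode before use.
# "cm9vdA==" is the base64 encoding of "root" (illustrative sample).
encoded_user="cm9vdA=="
decoded_user=$(printf '%s' "$encoded_user" | base64 -d)
echo "$decoded_user"   # prints: root
```

Against the live cluster the same step would combine the jsonpath read from the log with the decode, e.g. `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath='{.data.username}' | base64 -d`.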
describe cluster
`kbcli cluster describe mogdb-nbpumz --namespace ns-jvbuz`
Name: mogdb-nbpumz	 Created Time: May 28,2025 11:39 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
ns-jvbuz                                    Running   DoNotTerminate

Endpoints:
COMPONENT   INTERNAL                                              EXTERNAL
mogdb       mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local:26000

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE               ROLE        STATUS    AZ           NODE                                                        CREATED-TIME
mogdb       5.0.5             mogdb-nbpumz-mogdb-0   primary     Running   us-west-2a   ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169   May 28,2025 11:39 UTC+0800
mogdb       5.0.5             mogdb-nbpumz-mogdb-1   secondary   Running   us-west-2a   ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212   May 28,2025 11:40 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
mogdb                           500m / 500m          1Gi / 1Gi               data:5Gi       kb-default-sc

Images:
COMPONENT   COMPONENT-DEFINITION   IMAGE
mogdb       mogdb-1.0.0-alpha.0    docker.io/apecloud/mogdb:5.0.5
                                   docker.io/apecloud/mogdb-exporter:3.1.0

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-jvbuz mogdb-nbpumz

`kbcli cluster label mogdb-nbpumz app.kubernetes.io/instance- --namespace ns-jvbuz`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label mogdb-nbpumz app.kubernetes.io/instance=mogdb-nbpumz --namespace ns-jvbuz`
`kbcli cluster label mogdb-nbpumz --list --namespace ns-jvbuz`
NAME           NAMESPACE   LABELS
mogdb-nbpumz   ns-jvbuz    app.kubernetes.io/instance=mogdb-nbpumz
label cluster app.kubernetes.io/instance=mogdb-nbpumz Success

`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=mogdb-nbpumz --namespace ns-jvbuz`
`kbcli cluster label mogdb-nbpumz --list --namespace ns-jvbuz`
NAME           NAMESPACE   LABELS
mogdb-nbpumz   ns-jvbuz    app.kubernetes.io/instance=mogdb-nbpumz case.name=kbcli.test1
label cluster case.name=kbcli.test1 Success

`kbcli cluster label mogdb-nbpumz case.name=kbcli.test2 --overwrite --namespace ns-jvbuz`
`kbcli cluster label mogdb-nbpumz --list --namespace ns-jvbuz`
NAME           NAMESPACE   LABELS
mogdb-nbpumz   ns-jvbuz    app.kubernetes.io/instance=mogdb-nbpumz case.name=kbcli.test2
label cluster case.name=kbcli.test2 Success

`kbcli cluster label mogdb-nbpumz case.name- --namespace ns-jvbuz`
`kbcli cluster label mogdb-nbpumz --list --namespace ns-jvbuz`
NAME           NAMESPACE   LABELS
mogdb-nbpumz   ns-jvbuz    app.kubernetes.io/instance=mogdb-nbpumz
delete cluster label case.name Success

cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

`echo "echo \"create database if not exists benchtest;SELECT * from pg_stat_database;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
Defaulted container "mogdb" out of: mogdb, helper, exporter, kbagent, init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
CREATE DATABASE
 datid | datname   | numbackends | xact_commit | xact_rollback | blks_read | blks_hit | tup_returned | tup_fetched | tup_inserted | tup_updated | tup_deleted | conflicts | temp_files | temp_bytes | deadlocks | blk_read_time | blk_write_time | stats_reset
-------+-----------+-------------+-------------+---------------+-----------+----------+--------------+-------------+--------------+-------------+-------------+-----------+------------+------------+-----------+---------------+----------------+-------------------------------
     1 | template1 |           0 |           0 |             0 |         0 |        0 |            0 |           0 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 |
 16384 | mogdb     |           0 |         102 |             0 |       209 |     4091 |         1909 |        3588 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2025-05-28 03:40:24.478447+00
 16393 | benchtest |           0 |           0 |             0 |         0 |        0 |            0 |           0 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 |
 15315 | template0 |           0 |           0 |             0 |         0 |        0 |            0 |           0 |            0 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 |
 15320 | postgres  |          11 |         501 |             0 |       296 |    19759 |        11573 |       22750 |            2 |           0 |           0 |         0 |          0 |          0 |         0 |             0 |              0 | 2025-05-28 03:40:18.265067+00
(5 rows)
connect cluster Success

insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-mogdb-nbpumz" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-mogdb-nbpumz --namespace ns-jvbuz`
Error from server (NotFound): pods "test-db-client-executionloop-mogdb-nbpumz" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-mogdb-nbpumz" not found

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-mogdb-nbpumz
  namespace: ns-jvbuz
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local"
        - "--user"
        - "root"
        - "--password"
        - "41696O66h*#xH7FI"
        - "--port"
        - "26000"
        - "--dbtype"
        - "mogdb"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never

`kubectl apply -f test-db-client-executionloop-mogdb-nbpumz.yaml`
pod/test-db-client-executionloop-mogdb-nbpumz created
apply test-db-client-executionloop-mogdb-nbpumz.yaml Success
`rm -rf test-db-client-executionloop-mogdb-nbpumz.yaml`

check pod status
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          5s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          10s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          16s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          22s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          28s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          34s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          39s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          45s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          51s
pod_status:NAME                                        READY   STATUS    RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   1/1     Running   0          57s
check pod test-db-client-executionloop-mogdb-nbpumz status done
pod_status:NAME                                        READY   STATUS      RESTARTS   AGE
test-db-client-executionloop-mogdb-nbpumz   0/1     Completed   0          63s

check cluster status
`kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz`
NAME           NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
mogdb-nbpumz   ns-jvbuz                         DoNotTerminate       Running   May 28,2025 11:39 UTC+0800   app.kubernetes.io/instance=mogdb-nbpumz
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz`
NAME                   NAMESPACE   CLUSTER        COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                                        CREATED-TIME
mogdb-nbpumz-mogdb-0   ns-jvbuz    mogdb-nbpumz   mogdb       Running   primary                  us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169   May 28,2025 11:39 UTC+0800
mogdb-nbpumz-mogdb-1   ns-jvbuz    mogdb-nbpumz   mogdb       Running   secondary                us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212   May 28,2025 11:40 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check cluster connect
`echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check cluster connect done

May 28, 2025 3:42:52 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: [6cd86f74-25a0-40e3-9ae4-a37300e4c47a] Try to connect. IP: mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local:26000
May 28, 2025 3:42:52 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: [172.31.7.224:34418/ogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local/10.100.72.217:26000] Connection is established. ID: 6cd86f74-25a0-40e3-9ae4-a37300e4c47a
May 28, 2025 3:42:52 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: Connect complete. ID: 6cd86f74-25a0-40e3-9ae4-a37300e4c47a
Execution loop start: create databases executions_loop
CREATE DATABASE executions_loop;
reconnect connection executions_loop
May 28, 2025 3:42:53 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: [c2c849bb-8105-484a-8175-b68aeffb68dc] Try to connect. IP: mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local:26000
May 28, 2025 3:42:53 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: [172.31.7.224:34432/ogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local/10.100.72.217:26000] Connection is established. ID: c2c849bb-8105-484a-8175-b68aeffb68dc
May 28, 2025 3:42:53 AM io.mogdb.core.v3.ConnectionFactoryImpl openConnectionImpl
INFO: Connect complete. ID: c2c849bb-8105-484a-8175-b68aeffb68dc
drop table executions_loop_table
DROP TABLE IF EXISTS executions_loop_table;
create table executions_loop_table
CREATE TABLE IF NOT EXISTS executions_loop_table (id SERIAL PRIMARY KEY , value text);
Execution loop start:INSERT INTO executions_loop_table (value) VALUES ('executions_loop_test_1');
[ 1s ] executions total: 20 successful: 20 failed: 0 disconnect: 0
[ 2s ] executions total: 231 successful: 231 failed: 0 disconnect: 0
[ 3s ] executions total: 453 successful: 453 failed: 0 disconnect: 0
[ 4s ] executions total: 674 successful: 674 failed: 0 disconnect: 0
[ 5s ] executions total: 916 successful: 916 failed: 0 disconnect: 0
[ 6s ] executions total: 1136 successful: 1136 failed: 0 disconnect: 0
[ 7s ] executions total: 1332 successful: 1332 failed: 0 disconnect: 0
[ 8s ] executions total: 1564 successful: 1564 failed: 0 disconnect: 0
[ 9s ] executions total: 1789 successful: 1789 failed: 0 disconnect: 0
[ 10s ] executions total: 2000 successful: 2000 failed: 0 disconnect: 0
[ 11s ] executions total: 2212 successful: 2212 failed: 0 disconnect: 0
[ 12s ] executions total: 2442 successful: 2442 failed: 0 disconnect: 0
[ 13s ] executions total: 2656 successful: 2656 failed: 0 disconnect: 0
[ 14s ] executions total: 2880 successful: 2880 failed: 0 disconnect: 0
[ 15s ] executions total: 3119 successful: 3119 failed: 0 disconnect: 0
[ 16s ] executions total: 3304 successful: 3304 failed: 0 disconnect: 0
[ 17s ] executions total: 3512 successful: 3512 failed: 0 disconnect: 0
[ 18s ] executions total: 3752 successful: 3752 failed: 0 disconnect: 0
[ 19s ] executions total: 3962 successful: 3962 failed: 0 disconnect: 0
[ 20s ] executions total: 4187 successful: 4187 failed: 0 disconnect: 0
[ 21s ] executions total: 4395 successful: 4395 failed: 0 disconnect: 0
[ 22s ] executions total: 4597 successful: 4597 failed: 0 disconnect: 0
[ 23s ] executions total: 4846 successful: 4846 failed: 0 disconnect: 0
[ 24s ] executions total: 5079 successful: 5079 failed: 0 disconnect: 0
[ 25s ] executions total: 5293 successful: 5293 failed: 0 disconnect: 0
[ 26s ] executions total: 5525 successful: 5525 failed: 0 disconnect: 0
[ 27s ] executions total: 5768 successful: 5768 failed: 0 disconnect: 0
[ 28s ] executions total: 5971 successful: 5971 failed: 0 disconnect: 0
[ 29s ] executions total: 6167 successful: 6167 failed: 0 disconnect: 0
[ 30s ] executions total: 6375 successful: 6375 failed: 0 disconnect: 0
[ 31s ] executions total: 6581 successful: 6581 failed: 0 disconnect: 0
[ 32s ] executions total: 6838 successful: 6838 failed: 0 disconnect: 0
[ 33s ] executions total: 7045 successful: 7045 failed: 0 disconnect: 0
[ 34s ] executions total: 7199 successful: 7199 failed: 0 disconnect: 0
[ 35s ] executions total: 7425 successful: 7425 failed: 0 disconnect: 0
[ 36s ] executions total: 7651 successful: 7651 failed: 0 disconnect: 0
[ 37s ] executions total: 7849 successful: 7849 failed: 0 disconnect: 0
[ 38s ] executions total: 8078 successful: 8078 failed: 0 disconnect: 0
[ 39s ] executions total: 8290 successful: 8290 failed: 0 disconnect: 0
[ 40s ] executions total: 8502 successful: 8502 failed: 0 disconnect: 0
[ 41s ] executions total: 8746 successful: 8746 failed: 0 disconnect: 0
[ 42s ] executions total: 9005 successful: 9005 failed: 0 disconnect: 0
[ 43s ] executions total: 9213 successful: 9213 failed: 0 disconnect: 0
[ 44s ] executions total: 9446 successful: 9446 failed: 0 disconnect: 0
[ 45s ] executions total: 9671 successful: 9671 failed: 0 disconnect: 0
[ 46s ] executions total: 9891 successful: 9891 failed: 0 disconnect: 0
[ 47s ] executions total: 10111 successful: 10111 failed: 0 disconnect: 0
[ 48s ] executions total: 10347 successful: 10347 failed: 0 disconnect: 0
[ 49s ] executions total: 10555 successful: 10555 failed: 0 disconnect: 0
[ 50s ] executions total: 10805 successful: 10805 failed: 0 disconnect: 0
[ 51s ] executions total: 11043 successful: 11043 failed: 0 disconnect: 0
[ 52s ] executions total: 11258 successful: 11258 failed: 0 disconnect: 0
[ 53s ] executions total: 11496 successful: 11496 failed: 0 disconnect: 0
[ 54s ] executions total: 11738 successful: 11738 failed: 0 disconnect: 0
[ 55s ] executions total: 11960 successful: 11960 failed: 0 disconnect: 0
[ 56s ] executions total: 12189 successful: 12189 failed: 0 disconnect: 0
[ 57s ] executions total: 12406 successful: 12406 failed: 0 disconnect: 0
[ 58s ] executions total: 12602 successful: 12602 failed: 0 disconnect: 0
[ 59s ] executions total: 12841 successful: 12841 failed: 0 disconnect: 0
[ 60s ] executions total: 13072 successful: 13072 failed: 0 disconnect: 0

Test Result:
Total Executions: 13072
Successful Executions: 13072
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: mogdb
Host: mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local
Port: 26000
Database:
Table:
User: root
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 13072

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-mogdb-nbpumz --namespace ns-jvbuz`
pod/test-db-client-executionloop-mogdb-nbpumz patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
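The cleanup step above clears the pod's finalizers with a JSON merge patch before force-deleting it, so nothing can block deletion. A sketch of the patch document, validated locally with python3 (an assumption that python3 is available); in the log it is applied with `kubectl patch --type=merge`:

```shell
# Merge patch that empties metadata.finalizers so a force delete
# cannot be held up by a lingering finalizer.
patch='{"metadata":{"finalizers":[]}}'

# Validate the JSON locally and confirm finalizers is an empty list.
count=$(printf '%s' "$patch" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)["metadata"]["finalizers"]))')
echo "$count"   # prints: 0
```

Applying it would then be `kubectl patch pods <pod-name> -p "$patch" --type=merge -n ns-jvbuz`, as shown in the log.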
pod "test-db-client-executionloop-mogdb-nbpumz" force deleted

LB_TYPE is set to: intranet
cluster expose
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster expose mogdb-nbpumz --auto-approve --force=true --type intranet --enable true --components mogdb --role-selector primary --namespace ns-jvbuz`
OpsRequest mogdb-nbpumz-expose-9xnn2 created successfully, you can view the progress:
	kbcli cluster describe-ops mogdb-nbpumz-expose-9xnn2 -n ns-jvbuz

check ops status
`kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz`
NAME                        NAMESPACE   TYPE     CLUSTER        COMPONENT   STATUS    PROGRESS   CREATED-TIME
mogdb-nbpumz-expose-9xnn2   ns-jvbuz    Expose   mogdb-nbpumz   mogdb       Running   0/1        May 28,2025 11:44 UTC+0800

check cluster status
`kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz`
NAME           NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
mogdb-nbpumz   ns-jvbuz                         DoNotTerminate       Running   May 28,2025 11:39 UTC+0800   app.kubernetes.io/instance=mogdb-nbpumz
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz`
NAME                   NAMESPACE   CLUSTER        COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                                        CREATED-TIME
mogdb-nbpumz-mogdb-0   ns-jvbuz    mogdb-nbpumz   mogdb       Running   primary                  us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169   May 28,2025 11:39 UTC+0800
mogdb-nbpumz-mogdb-1   ns-jvbuz    mogdb-nbpumz   mogdb       Running   secondary                us-west-2a   1100m / 600m         1188Mi / 1124Mi         data:5Gi   ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212   May 28,2025 11:40 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check cluster connect
`echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz`
NAME                        NAMESPACE   TYPE     CLUSTER        COMPONENT   STATUS    PROGRESS   CREATED-TIME
mogdb-nbpumz-expose-9xnn2   ns-jvbuz    Expose   mogdb-nbpumz   mogdb       Succeed   1/1        May 28,2025 11:44 UTC+0800
check ops status done
ops_status:mogdb-nbpumz-expose-9xnn2   ns-jvbuz   Expose   mogdb-nbpumz   mogdb   Succeed   1/1   May 28,2025 11:44 UTC+0800

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-expose-9xnn2 --namespace ns-jvbuz`
opsrequest.operations.kubeblocks.io/mogdb-nbpumz-expose-9xnn2 patched
`kbcli cluster delete-ops --name mogdb-nbpumz-expose-9xnn2 --force --auto-approve --namespace ns-jvbuz`
OpsRequest mogdb-nbpumz-expose-9xnn2 deleted

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check db_client batch data count
`echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check db_client batch data Success

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check readonly db_client batch data count
`echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash`
check readonly db_client batch data Success

check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale mogdb-nbpumz --auto-approve --force=true --components mogdb --cpu 600m --memory 1.1Gi --namespace ns-jvbuz`
OpsRequest mogdb-nbpumz-verticalscaling-tg2ml created successfully, you can view the progress:
	kbcli cluster describe-ops mogdb-nbpumz-verticalscaling-tg2ml -n ns-jvbuz

check ops status
`kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz`
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT   STATUS    PROGRESS   CREATED-TIME
mogdb-nbpumz-verticalscaling-tg2ml   ns-jvbuz    VerticalScaling   mogdb-nbpumz   mogdb       Running   0/2        May 28,2025 11:44 UTC+0800

check cluster status
`kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz`
NAME           NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mogdb-nbpumz   ns-jvbuz                         DoNotTerminate       Updating   May 28,2025 11:39 UTC+0800   app.kubernetes.io/instance=mogdb-nbpumz
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz`
NAME                   NAMESPACE   CLUSTER        COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)             STORAGE    NODE                                                        CREATED-TIME
mogdb-nbpumz-mogdb-0   ns-jvbuz    mogdb-nbpumz   mogdb       Running   primary                  us-west-2a   1200m / 700m         1353082470400m / 1285973606400m   data:5Gi   ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169   May 28,2025 11:46 UTC+0800
mogdb-nbpumz-mogdb-1   ns-jvbuz    mogdb-nbpumz   mogdb       Running   secondary                us-west-2a   1200m / 700m         1353082470400m / 1285973606400m   data:5Gi   ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212   May 28,2025 11:44 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check cluster connect
`echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check cluster connect done

check ops status
`kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz`
NAME                                 NAMESPACE   TYPE              CLUSTER        COMPONENT   STATUS    PROGRESS   CREATED-TIME
mogdb-nbpumz-verticalscaling-tg2ml   ns-jvbuz    VerticalScaling   mogdb-nbpumz   mogdb       Succeed   2/2        May 28,2025 11:44 UTC+0800
check ops status done
ops_status:mogdb-nbpumz-verticalscaling-tg2ml   ns-jvbuz   VerticalScaling   mogdb-nbpumz   mogdb   Succeed   2/2   May 28,2025 11:44 UTC+0800
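The repeated `cluster_status:Updating` lines above come from a poll-until-Running loop around the cluster's status. A minimal sketch of that loop; `get_status` is stubbed here to flip to `Running` on the third call, whereas against the real cluster it would run something like `kubectl get cluster mogdb-nbpumz -n ns-jvbuz -o jsonpath='{.status.phase}'` (an assumed field name, not taken from this log):

```shell
# Poll-until-Running sketch. get_status is a stub standing in for the
# real status query; it reports Updating twice, then Running.
i=0
get_status() {
  i=$((i + 1))
  if [ "$i" -lt 3 ]; then status="Updating"; else status="Running"; fi
}

get_status
while [ "$status" != "Running" ]; do
  echo "cluster_status:$status"
  get_status      # a real loop would sleep a few seconds between polls
done
echo "check cluster status done"
echo "cluster_status:$status"
```

The stubbed run prints two `cluster_status:Updating` lines followed by `check cluster status done` and `cluster_status:Running`, mirroring the shape of the log output.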
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-verticalscaling-tg2ml --namespace ns-jvbuz`
opsrequest.operations.kubeblocks.io/mogdb-nbpumz-verticalscaling-tg2ml patched
`kbcli cluster delete-ops --name mogdb-nbpumz-verticalscaling-tg2ml --force --auto-approve --namespace ns-jvbuz`
OpsRequest mogdb-nbpumz-verticalscaling-tg2ml deleted

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check db_client batch data count
`echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash`
check db_client batch data Success

`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE:

check readonly db_client batch data count
`echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash`
check readonly db_client batch data Success

cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart mogdb-nbpumz --auto-approve --force=true --namespace ns-jvbuz`
OpsRequest mogdb-nbpumz-restart-txd5c created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-restart-txd5c -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-restart-txd5c ns-jvbuz Restart mogdb-nbpumz mogdb Running 0/2 May 28,2025 11:47 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz DoNotTerminate Updating May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:49 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-13-212.us-west-2.compute.internal/172.31.13.212 May 28,2025 11:47 UTC+0800 check pod status done check cluster role check cluster role done primary: 
mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-restart-txd5c ns-jvbuz Restart mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:47 UTC+0800 check ops status done ops_status:mogdb-nbpumz-restart-txd5c ns-jvbuz Restart mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:47 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-restart-txd5c --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-restart-txd5c patched `kbcli cluster delete-ops --name mogdb-nbpumz-restart-txd5c --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-restart-txd5c deleted `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U
root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check readonly db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash` check readonly db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop mogdb-nbpumz --auto-approve --force=true --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-stop-w46ng created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-stop-w46ng -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-stop-w46ng ns-jvbuz Stop mogdb-nbpumz mogdb Running 0/2 May 28,2025 11:50 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz DoNotTerminate Stopped May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done
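The credential checks above read `.data.username`, `.data.password`, and `.data.port` from the account Secret via kubectl jsonpath; Secret `data` fields are base64-encoded, so each value has to be decoded before it can be used as DB_USERNAME, DB_PASSWORD, and DB_PORT. A minimal decoding sketch (the sample strings below are illustrative stand-ins, not values read from the live Secret):

```shell
# Kubernetes Secret .data fields are base64; jsonpath returns them still encoded.
# Decode locally with base64 -d (sample values for illustration only).
username_b64="cm9vdA=="   # base64 of "root"
port_b64="MjYwMDA="       # base64 of "26000"
echo "$username_b64" | base64 -d   # prints: root
echo
echo "$port_b64" | base64 -d       # prints: 26000
echo
```

In the live run the same decode would be applied to the output of each `kubectl get secrets ... -o jsonpath="{.data.*}"` command shown above.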
check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-stop-w46ng ns-jvbuz Stop mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:50 UTC+0800 check ops status done ops_status:mogdb-nbpumz-stop-w46ng ns-jvbuz Stop mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:50 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-stop-w46ng --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-stop-w46ng patched `kbcli cluster delete-ops --name mogdb-nbpumz-stop-w46ng --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-stop-w46ng deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start mogdb-nbpumz --force=true --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-start-c2zd2 created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-start-c2zd2 -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-start-c2zd2 ns-jvbuz Start mogdb-nbpumz mogdb Running 0/2 May 28,2025 11:51 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz DoNotTerminate Updating May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz cluster_status:Updating (repeated while waiting)
check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-0-89.us-west-2.compute.internal/172.31.0.89 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-start-c2zd2 ns-jvbuz Start mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:51 UTC+0800 check ops status done ops_status:mogdb-nbpumz-start-c2zd2 ns-jvbuz Start mogdb-nbpumz mogdb Succeed 2/2 May 28,2025 11:51 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations
mogdb-nbpumz-start-c2zd2 --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-start-c2zd2 patched `kbcli cluster delete-ops --name mogdb-nbpumz-start-c2zd2 --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-start-c2zd2 deleted `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check readonly db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash` check readonly db_client batch data Success cluster mogdb scale-out check cluster status before ops check cluster status done cluster_status:Running No resources found in mogdb-nbpumz namespace.
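Each completed OpsRequest in this run is cleaned up the same way: a `--type=merge` patch first empties `metadata.finalizers`, so the forced `delete-ops` that follows cannot hang on finalizer processing. A minimal sketch of that payload, with a local well-formedness check added for illustration (the check is not part of the original run):

```shell
# The merge patch used above to clear an OpsRequest's finalizers.
patch='{"metadata":{"finalizers":[]}}'
# Sanity-check that the payload is well-formed JSON before sending it:
echo "$patch" | python3 -m json.tool
# It would then be applied along the lines of (names are placeholders):
#   kubectl patch -p "$patch" --type=merge opsrequests.operations <name> -n <ns>
```

Clearing finalizers bypasses the controller's normal cleanup, which is acceptable here only because the test harness deletes the OpsRequest immediately afterwards.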
`kbcli cluster scale-out mogdb-nbpumz --auto-approve --force=true --components mogdb --replicas 1 --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-horizontalscaling-2x9fd created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-horizontalscaling-2x9fd -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-horizontalscaling-2x9fd ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Running 0/1 May 28,2025 11:53 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz DoNotTerminate Updating May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz cluster_status:Updating (repeated while waiting) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-0-89.us-west-2.compute.internal/172.31.0.89 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-2 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi
ip-172-31-10-203.us-west-2.compute.internal/172.31.10.203 May 28,2025 11:53 UTC+0800 check pod status done check cluster role check cluster role done primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 mogdb-nbpumz-mogdb-2 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done No resources found in mogdb-nbpumz namespace. check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-horizontalscaling-2x9fd ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 11:53 UTC+0800 check ops status done ops_status:mogdb-nbpumz-horizontalscaling-2x9fd ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 11:53 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-horizontalscaling-2x9fd --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-horizontalscaling-2x9fd patched `kbcli cluster delete-ops --name mogdb-nbpumz-horizontalscaling-2x9fd --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-horizontalscaling-2x9fd deleted `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check readonly db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash` check readonly db_client batch data Success cluster mogdb scale-in check cluster status before ops check cluster status done cluster_status:Running No resources found in mogdb-nbpumz namespace.
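The `list-instances` tables in this run report memory as raw Kubernetes resource quantities with an `m` (milli) suffix, e.g. `1353082470400m`, i.e. milli-bytes rather than millicores. A small conversion sketch using that figure from the output above:

```shell
# Convert a Kubernetes milli-quantity of memory (milli-bytes) to bytes and GiB.
qty="1353082470400m"
bytes=$(echo "$qty" | awk '{ sub(/m$/, ""); printf "%.1f", $1 / 1000 }')
echo "$qty = $bytes bytes"
echo "$bytes" | awk '{ printf "~%.2f GiB\n", $1 / (1024 * 1024 * 1024) }'
```

So `1353082470400m` works out to roughly 1.26 GiB, which is easier to compare against the `5Gi` storage and `1Gi`-scale memory figures elsewhere in the log.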
`kbcli cluster scale-in mogdb-nbpumz --auto-approve --force=true --components mogdb --replicas 1 --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-horizontalscaling-tmm4j created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-horizontalscaling-tmm4j -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-horizontalscaling-tmm4j ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Running 0/1 May 28,2025 11:55 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz DoNotTerminate Running May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-0-89.us-west-2.compute.internal/172.31.0.89 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done No resources found in mogdb-nbpumz namespace. check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-horizontalscaling-tmm4j ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 11:55 UTC+0800 check ops status done ops_status:mogdb-nbpumz-horizontalscaling-tmm4j ns-jvbuz HorizontalScaling mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 11:55 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-horizontalscaling-tmm4j --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-horizontalscaling-tmm4j patched `kbcli cluster delete-ops --name mogdb-nbpumz-horizontalscaling-tmm4j --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-horizontalscaling-tmm4j deleted `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets
mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check readonly db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash` check readonly db_client batch data Success 5 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: apiVersion: v1 kind: Pod metadata: name: benchtest-mogdb-nbpumz namespace: ns-jvbuz spec: containers: - name: test-sysbench imagePullPolicy: IfNotPresent image: docker.io/apecloud/customsuites:latest env: - name: TYPE value: "2" - name: FLAG value: "0" - name: CONFIGS value: "mode:all,driver:pgsql,host:mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local,user:root,password:41696O66h*#xH7FI,port:26000,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write" restartPolicy: Never `kubectl apply -f benchtest-mogdb-nbpumz.yaml` pod/benchtest-mogdb-nbpumz created apply benchtest-mogdb-nbpumz.yaml Success `rm -rf benchtest-mogdb-nbpumz.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 1s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 6s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 12s pod_status:NAME READY STATUS RESTARTS
AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 17s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 23s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 29s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 35s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 41s check pod benchtest-mogdb-nbpumz status done pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 Completed 0 46s `kubectl logs benchtest-mogdb-nbpumz --tail 30 --namespace ns-jvbuz ` [ 7s ] thds: 4 tps: 94.01 qps: 1899.29 (r/w/o: 1328.20/382.06/189.03) lat (ms,99%): 81.48 err/s: 0.00 reconn/s: 0.00 [ 8s ] thds: 4 tps: 87.00 qps: 1736.98 (r/w/o: 1218.99/344.00/174.00) lat (ms,99%): 97.55 err/s: 0.00 reconn/s: 0.00 [ 9s ] thds: 4 tps: 6.00 qps: 115.99 (r/w/o: 76.99/25.00/14.00) lat (ms,99%): 1032.01 err/s: 1.00 reconn/s: 0.00 [ 10s ] thds: 4 tps: 97.00 qps: 1959.02 (r/w/o: 1371.02/394.00/194.00) lat (ms,99%): 960.30 err/s: 0.00 reconn/s: 0.00 SQL statistics: queries performed: read: 10934 write: 3113 other: 1566 total: 15613 transactions: 778 (77.21 per sec.) queries: 15613 (1549.55 per sec.) ignored errors: 3 (0.30 per sec.) reconnects: 0 (0.00 per sec.) General statistics: total time: 10.0744s total number of events: 778 Latency (ms): min: 12.25 avg: 51.60 max: 1039.25 99th percentile: 101.13 sum: 40144.96 Threads fairness: events (avg/stddev): 194.5000/5.68 execution time (avg/stddev): 10.0362/0.04 `kubectl delete pod benchtest-mogdb-nbpumz --force --namespace ns-jvbuz ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "benchtest-mogdb-nbpumz" force deleted LB_TYPE is set to: intranet No resources found in ns-jvbuz namespace. 
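The sysbench summary rates follow directly from the totals: 778 transactions and 15613 queries over the 10.0744 s total time give the ~77.21 tps and ~1549.55 q/s reported (the small differences come from sysbench computing rates over its own measured interval). A quick arithmetic cross-check:

```shell
# Cross-check the sysbench summary rates: rate = count / total time.
awk 'BEGIN {
  t = 10.0744                       # "total time" reported by sysbench (s)
  printf "tps ~ %.2f\n", 778 / t    # summary reports 77.21 per sec.
  printf "qps ~ %.2f\n", 15613 / t  # summary reports 1549.55 per sec.
}'
```

The same check applies to the second (LoadBalancer-routed) sysbench run below, whose slightly lower 70.78 tps reflects the extra network hop.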
`kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: apiVersion: v1 kind: Pod metadata: name: benchtest-mogdb-nbpumz namespace: ns-jvbuz spec: containers: - name: test-sysbench imagePullPolicy: IfNotPresent image: docker.io/apecloud/customsuites:latest env: - name: TYPE value: "2" - name: FLAG value: "0" - name: CONFIGS value: "mode:all,driver:pgsql,host:ada2428e90aff42c2bcbefb80f392d46-9829420c0a0ff931.elb.us-west-2.amazonaws.com,user:root,password:41696O66h*#xH7FI,port:26000,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write" restartPolicy: Never `kubectl apply -f benchtest-mogdb-nbpumz.yaml` pod/benchtest-mogdb-nbpumz created apply benchtest-mogdb-nbpumz.yaml Success `rm -rf benchtest-mogdb-nbpumz.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 1s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 6s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 12s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 18s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 0/1 ContainerCreating 0 23s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 29s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 35s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 41s pod_status:NAME READY STATUS RESTARTS AGE benchtest-mogdb-nbpumz 1/1 Running 0 47s check pod benchtest-mogdb-nbpumz status done pod_status:NAME READY STATUS RESTARTS AGE
benchtest-mogdb-nbpumz 0/1 Completed 0 52s `kubectl logs benchtest-mogdb-nbpumz --tail 30 --namespace ns-jvbuz ` [ 7s ] thds: 4 tps: 72.18 qps: 1484.21 (r/w/o: 1045.05/290.74/148.42) lat (ms,99%): 170.48 err/s: 2.03 reconn/s: 0.00 [ 8s ] thds: 4 tps: 13.00 qps: 273.02 (r/w/o: 192.01/55.00/26.00) lat (ms,99%): 78.60 err/s: 0.00 reconn/s: 0.00 [ 9s ] thds: 4 tps: 79.99 qps: 1586.89 (r/w/o: 1110.92/314.98/160.99) lat (ms,99%): 1032.01 err/s: 1.00 reconn/s: 0.00 [ 10s ] thds: 4 tps: 72.99 qps: 1470.77 (r/w/o: 1025.84/297.95/146.98) lat (ms,99%): 99.33 err/s: 0.00 reconn/s: 0.00 SQL statistics: queries performed: read: 10010 write: 2844 other: 1435 total: 14289 transactions: 710 (70.78 per sec.) queries: 14289 (1424.51 per sec.) ignored errors: 5 (0.50 per sec.) reconnects: 0 (0.00 per sec.) General statistics: total time: 10.0293s total number of events: 710 Latency (ms): min: 13.86 avg: 56.40 max: 1047.92 99th percentile: 170.48 sum: 40045.98 Threads fairness: events (avg/stddev): 177.5000/3.91 execution time (avg/stddev): 10.0115/0.01 `kubectl delete pod benchtest-mogdb-nbpumz --force --namespace ns-jvbuz ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod "benchtest-mogdb-nbpumz" force deleted cluster update terminationPolicy WipeOut `kbcli cluster update mogdb-nbpumz --termination-policy=WipeOut --namespace ns-jvbuz ` cluster.apps.kubeblocks.io/mogdb-nbpumz updated check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz WipeOut Running May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-0-89.us-west-2.compute.internal/172.31.0.89 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done cluster volume-snapshot backup `kbcli cluster backup
mogdb-nbpumz --method volume-snapshot --namespace ns-jvbuz ` Backup backup-ns-jvbuz-mogdb-nbpumz-20250528120051 created successfully, you can view the progress: kbcli cluster list-backups --name=backup-ns-jvbuz-mogdb-nbpumz-20250528120051 -n ns-jvbuz check backup status `kbcli cluster list-backups mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-jvbuz-mogdb-nbpumz-20250528120051 ns-jvbuz mogdb-nbpumz volume-snapshot Running Delete May 28,2025 12:00 UTC+0800 backup_status:mogdb-nbpumz-volume-snapshot-Running (repeated while waiting) check backup status done backup_status:backup-ns-jvbuz-mogdb-nbpumz-20250528120051 ns-jvbuz mogdb-nbpumz volume-snapshot Completed 5Gi 2m12s Delete May 28,2025 12:00 UTC+0800 May 28,2025 12:03 UTC+0800 cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io
"mogdb-nbpumz-backup" not found `kbcli cluster describe-backup --names backup-ns-jvbuz-mogdb-nbpumz-20250528120051 --namespace ns-jvbuz ` Name: backup-ns-jvbuz-mogdb-nbpumz-20250528120051 Cluster: mogdb-nbpumz Namespace: ns-jvbuz Spec: Method: volume-snapshot Policy Name: mogdb-nbpumz-mogdb-backup-policy Actions: createVolumeSnapshot-0: panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b1becf] goroutine 1 [running]: github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc00103a300, 0xc000894008) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4cf github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc00103a300, {0xc00128f220?, 0x18fd69b?, 0xc0013f2b48?}) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125 github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, {0xc0004dfec0?, 0x0?, 0xcfdcafea00000000?}) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66 github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc001248608?, {0xc0004dfec0, 0x0, 0x4}) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5 github.com/spf13/cobra.(*Command).execute(0xc001248608, {0xc0004dfe80, 0x4, 0x4}) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xab1 github.com/spf13/cobra.(*Command).ExecuteC(0xc0009a5b08) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff github.com/spf13/cobra.(*Command).Execute(...) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041 k8s.io/component-base/cli.run(0xc0009a5b08) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290 k8s.io/component-base/cli.RunNoErrOutput(...)
/home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84 main.main() /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18 `kbcli cluster restore mogdb-nbpumz-backup --backup backup-ns-jvbuz-mogdb-nbpumz-20250528120051 --namespace ns-jvbuz ` Cluster mogdb-nbpumz-backup created check cluster status `kbcli cluster list mogdb-nbpumz-backup --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz-backup ns-jvbuz WipeOut Creating May 28,2025 12:03 UTC+0800 cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz-backup --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-backup-mogdb-0 ns-jvbuz mogdb-nbpumz-backup mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-12-152.us-west-2.compute.internal/172.31.12.152 May 28,2025 12:03 UTC+0800 mogdb-nbpumz-backup-mogdb-1 ns-jvbuz mogdb-nbpumz-backup mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-6-100.us-west-2.compute.internal/172.31.6.100 May 28,2025 12:04 UTC+0800 check pod status done check cluster role check cluster role done primary:
mogdb-nbpumz-backup-mogdb-0;secondary: mogdb-nbpumz-backup-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz-backup` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-backup-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-backup-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done `kbcli cluster describe-backup --names backup-ns-jvbuz-mogdb-nbpumz-20250528120051 --namespace ns-jvbuz ` Name: backup-ns-jvbuz-mogdb-nbpumz-20250528120051 Cluster: mogdb-nbpumz Namespace: ns-jvbuz Spec: Method: volume-snapshot Policy Name: mogdb-nbpumz-mogdb-backup-policy Actions: createVolumeSnapshot-0: panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b1becf] goroutine 1 [running]: github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc00129e540, 0xc000936588) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4cf github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc00129e540, {0xc0015256d0?, 0x18fd69b?, 0xc0015726c8?}) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125 github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, {0xc000935400?, 0x0?, 0x6d3807a00000000?}) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66 github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc001533508?, {0xc000935400, 0x0, 0x4}) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5
github.com/spf13/cobra.(*Command).execute(0xc001533508, {0xc0009353c0, 0x4, 0x4}) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xab1 github.com/spf13/cobra.(*Command).ExecuteC(0xc0008a0908) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff github.com/spf13/cobra.(*Command).Execute(...) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041 k8s.io/component-base/cli.run(0xc0008a0908) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290 k8s.io/component-base/cli.RunNoErrOutput(...) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84 main.main() /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18 cluster connect `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz-backup` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: `echo "echo \"create database if not exists benchtest;SELECT * from pg_stat_database;\" | gsql -h mogdb-nbpumz-backup-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-backup-mogdb-0 --namespace ns-jvbuz -- bash ` Defaulted container "mogdb" out of: mogdb, helper, exporter, kbagent, init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file NOTICE: database "benchtest" already exists, skipping CREATE DATABASE datid | datname | numbackends | xact_commit | xact_rollback | blks_read | blks_hit | tup_returned | tup_fetched | tup_inserted | tup_updated | tup_deleted | conflicts | temp_files | temp_bytes | deadlocks | blk_read_time | blk_write_time | stats_reset
-------+-----------------+-------------+-------------+---------------+-----------+----------+--------------+-------------+--------------+-------------+-------------+-----------+------------+------------+-----------+---------------+----------------+-------------------------------
     1 | template1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
 16384 | mogdb | 2 | 118 | 0 | 308 | 10428 | 12431 | 9600 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2025-05-28 04:03:46.383307+00
 16393 | benchtest | 2 | 113 | 0 | 285 | 8328 | 10923 | 8364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2025-05-28 04:03:46.506554+00
 15315 | template0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
 16394 | executions_loop | 2 | 117 | 0 | 293 | 7857 | 11621 | 7882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2025-05-28 04:03:46.737517+00
 15320 | postgres | 13 | 440 | 1 | 553 | 26013 | 26166 | 31552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2025-05-28 04:03:42.616207+00
(6 rows) connect cluster Success delete cluster mogdb-nbpumz-backup `kbcli cluster delete mogdb-nbpumz-backup --auto-approve --namespace ns-jvbuz ` Cluster mogdb-nbpumz-backup deleted pod_info:mogdb-nbpumz-backup-mogdb-0 4/4 Terminating 0 2m59s mogdb-nbpumz-backup-mogdb-1 4/4 Terminating 0 118s No resources found in ns-jvbuz namespace. delete cluster pod done No resources found in ns-jvbuz namespace. check cluster resource non-exist OK: pvc No resources found in ns-jvbuz namespace. delete cluster done No resources found in ns-jvbuz namespace. No resources found in ns-jvbuz namespace. No resources found in ns-jvbuz namespace.
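The connect checks above build the `gsql` command line from the cluster's root account secret; the values returned by the `kubectl get secrets ... -o jsonpath` calls are base64-encoded and must be decoded before use. A minimal sketch of that decoding step (the secret name comes from the log; the `decode_field` helper is illustrative, not the harness's real code):

```shell
# Decode one field of a KubeBlocks account secret.
# Kubernetes stores Secret data base64-encoded, so the raw jsonpath
# output (e.g. "cm9vdA==") must be decoded before it is usable.
decode_field() {
  printf '%s' "$1" | base64 -d
}

# Against a live cluster (commented out here), DB_USERNAME would come from:
# kubectl get secrets mogdb-nbpumz-backup-mogdb-account-root -n ns-jvbuz \
#   -o jsonpath='{.data.username}' | base64 -d
```

The same decode applies to the `password` and `port` fields that feed the `DB_PASSWORD` and `DB_PORT` values printed in the log.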
cluster delete backup `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-jvbuz-mogdb-nbpumz-20250528120051 --namespace ns-jvbuz ` backup.dataprotection.kubeblocks.io/backup-ns-jvbuz-mogdb-nbpumz-20250528120051 patched `kbcli cluster delete-backup mogdb-nbpumz --name backup-ns-jvbuz-mogdb-nbpumz-20250528120051 --force --auto-approve --namespace ns-jvbuz ` Backup backup-ns-jvbuz-mogdb-nbpumz-20250528120051 deleted No opsrequests found in ns-jvbuz namespace. cluster list-logs `kbcli cluster list-logs mogdb-nbpumz --namespace ns-jvbuz ` No log files found. Error from server (NotFound): pods "mogdb-nbpumz-mogdb-0" not found cluster logs `kbcli cluster logs mogdb-nbpumz --tail 30 --namespace ns-jvbuz ` Defaulted container "mogdb" out of: mogdb, helper, exporter, kbagent, init-kbagent (init), kbagent-worker (init) 0 WARNING: failed to open feature control file, please check whether it exists: FileName=gaussdb.version, Errno=2, Errmessage=No such file or directory. 0 WARNING: failed to parse feature control file: gaussdb.version. 0 WARNING: Failed to load the product control file, so gaussdb cannot distinguish product version. 2025-05-28 03:51:14.973 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: when starting as multi_standby mode, we couldn't support data replicaton. 2025-05-28 03:51:14.986 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: [Alarm Module]can not read GAUSS_WARNING_TYPE env. 2025-05-28 03:51:14.986 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: [Alarm Module]Host Name: mogdb-nbpumz-mogdb-0 2025-05-28 03:51:14.986 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: [Alarm Module]Host IP: mogdb-nbpumz-mogdb-0.
Copy hostname directly in case of taking 10s to use 'gethostbyname' when /etc/hosts does not contain 2025-05-28 03:51:14.986 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: [Alarm Module]Get ENV GS_CLUSTER_NAME failed! 2025-05-28 03:51:14.989 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: loaded library "security_plugin" 2025-05-28 03:51:14.991 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] WARNING: could not create any HA TCP/IP sockets 2025-05-28 03:51:14.996 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: InitNuma numaNodeNum: 1 numa_distribute_mode: none inheritThreadPool: 0. 2025-05-28 03:51:14.996 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: reserved memory for backend threads is: 220 MB 2025-05-28 03:51:14.996 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: reserved memory for WAL buffers is: 128 MB 2025-05-28 03:51:14.996 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: Set max backend reserve memory is: 348 MB, max dynamic memory is: 11070 MB 2025-05-28 03:51:14.996 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: shared memory 357 Mbytes, memory context 11418 Mbytes, max process memory 12288 Mbytes 2025-05-28 03:51:15.034 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [CACHE] LOG: set data cache size(402653184) 2025-05-28 03:51:15.171 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [SEGMENT_PAGE] LOG: Segment-page constants: DF_MAP_SIZE: 8156, DF_MAP_BIT_CNT: 65248, DF_MAP_GROUP_EXTENTS: 4175872, IPBLOCK_SIZE: 8168, EXTENTS_PER_IPBLOCK: 1021, IPBLOCK_GROUP_SIZE: 4090, BMT_HEADER_LEVEL0_TOTAL_PAGES: 8323072, BktMapEntryNumberPerBlock: 2038, BktMapBlockNumber: 49, BktBitMaxMapCnt: 512 2025-05-28 03:51:15.209 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: mogdb: fsync file "/var/lib/mogdb/data/gaussdb.state.temp" success 
2025-05-28 03:51:15.209 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: create gaussdb state file success: db state(STARTING_STATE), server mode(Primary), connection index(1) 2025-05-28 03:51:15.239 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: max_safe_fds = 979, usable_fds = 1000, already_open = 11 2025-05-28 03:51:15.247 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: the configure file /usr/local/mogdb/etc/gscgroup_omm.cfg doesn't exist or the size of configure file has changed. Please create it by root user! 2025-05-28 03:51:15.247 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [BACKEND] LOG: Failed to parse cgroup config file. 2025-05-28 03:51:15.279 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [EXECUTOR] WARNING: Failed to obtain environment value $GAUSSLOG! 2025-05-28 03:51:15.279 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [EXECUTOR] DETAIL: N/A 2025-05-28 03:51:15.279 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [EXECUTOR] CAUSE: Incorrect environment value. 2025-05-28 03:51:15.279 [unknown] [unknown] localhost 139641596093440 0[0:0#0] 0 [EXECUTOR] ACTION: Please refer to backend log for more details. 
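The backup deletion above first strips the object's finalizers with `kubectl patch` so that removal cannot hang on the DataProtection controller. A sketch of that pattern, using the resource names from the log (the payload is a standard Kubernetes merge patch, not anything harness-specific):

```shell
# Merge-patch that empties the finalizer list, letting the API server
# remove the object immediately once it is deleted.
PATCH='{"metadata":{"finalizers":[]}}'

# Against a live cluster (commented out here):
# kubectl patch backups backup-ns-jvbuz-mogdb-nbpumz-20250528120051 \
#   --type=merge -p "$PATCH" --namespace ns-jvbuz
```

The same patch is applied to the `opsrequests.operations` object later in the log before `kbcli cluster delete-ops --force`. Clearing finalizers skips controller-side cleanup, so it is only appropriate in teardown paths like this one.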
LB_TYPE is set to: intranet cluster expose check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster expose mogdb-nbpumz --auto-approve --force=true --type intranet --enable false --components mogdb --role-selector primary --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-expose-99m8t created successfully, you can view the progress: kbcli cluster describe-ops mogdb-nbpumz-expose-99m8t -n ns-jvbuz check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-expose-99m8t ns-jvbuz Expose mogdb-nbpumz mogdb Running 0/1 May 28,2025 12:06 UTC+0800 check cluster status `kbcli cluster list mogdb-nbpumz --show-labels --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mogdb-nbpumz ns-jvbuz WipeOut Running May 28,2025 11:39 UTC+0800 app.kubernetes.io/instance=mogdb-nbpumz check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mogdb-nbpumz --namespace ns-jvbuz ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mogdb-nbpumz-mogdb-0 ns-jvbuz mogdb-nbpumz mogdb Running primary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-11-169.us-west-2.compute.internal/172.31.11.169 May 28,2025 11:51 UTC+0800 mogdb-nbpumz-mogdb-1 ns-jvbuz mogdb-nbpumz mogdb Running secondary us-west-2a 1200m / 700m 1353082470400m / 1285973606400m data:5Gi ip-172-31-0-89.us-west-2.compute.internal/172.31.0.89 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: mogdb-nbpumz-mogdb-0;secondary: mogdb-nbpumz-mogdb-1 `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o
jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check cluster connect `echo "gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d postgres -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check cluster connect done check ops status `kbcli cluster list-ops mogdb-nbpumz --status all --namespace ns-jvbuz ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mogdb-nbpumz-expose-99m8t ns-jvbuz Expose mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 12:06 UTC+0800 check ops status done ops_status:mogdb-nbpumz-expose-99m8t ns-jvbuz Expose mogdb-nbpumz mogdb Succeed 1/1 May 28,2025 12:06 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mogdb-nbpumz-expose-99m8t --namespace ns-jvbuz ` opsrequest.operations.kubeblocks.io/mogdb-nbpumz-expose-99m8t patched `kbcli cluster delete-ops --name mogdb-nbpumz-expose-99m8t --force --auto-approve --namespace ns-jvbuz ` OpsRequest mogdb-nbpumz-expose-99m8t deleted `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h mogdb-nbpumz-mogdb.ns-jvbuz.svc.cluster.local -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-0 --namespace ns-jvbuz -- bash` check db_client batch data Success `kubectl get secrets -l app.kubernetes.io/instance=mogdb-nbpumz` `kubectl get secrets
mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mogdb-nbpumz-mogdb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:41696O66h*#xH7FI;DB_PORT:26000;DB_DATABASE: check readonly db_client batch data count `echo "echo \"select count(*) from executions_loop_table;\" | gsql -h 127.0.0.1 -U root -p 26000 -d executions_loop -W '41696O66h*#xH7FI' " | kubectl exec -it mogdb-nbpumz-mogdb-1 --namespace ns-jvbuz -- bash` check readonly db_client batch data Success delete cluster mogdb-nbpumz `kbcli cluster delete mogdb-nbpumz --auto-approve --namespace ns-jvbuz ` Cluster mogdb-nbpumz deleted pod_info:mogdb-nbpumz-mogdb-0 4/4 Terminating 0 16m mogdb-nbpumz-mogdb-1 4/4 Terminating 0 15m No resources found in ns-jvbuz namespace. delete cluster pod done No resources found in ns-jvbuz namespace. check cluster resource non-exist OK: pvc No resources found in ns-jvbuz namespace. delete cluster done No resources found in ns-jvbuz namespace. No resources found in ns-jvbuz namespace. No resources found in ns-jvbuz namespace. Mogdb Test Suite All Done!
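The repeated `cluster_status:` / `backup_status:` checks above come from a wait loop that re-reads the kbcli list output until the STATUS column reaches the expected value. A sketch of the column extraction such a loop depends on (the field position matches the `kbcli cluster list-backups` header shown earlier; the loop itself is an assumption about how the harness waits, not its real code):

```shell
# Pull the STATUS column (5th field) out of a `kbcli cluster list-backups`
# data row; a poll loop compares it against the target state.
row_status() {
  printf '%s\n' "$1" | awk '{print $5}'
}

# Poll sketch (needs a live cluster, so commented out):
# until [ "$(kbcli cluster list-backups mogdb-nbpumz -n ns-jvbuz \
#            | awk 'NR==2 {print $5}')" = "Completed" ]; do sleep 5; done
```

Column positions are only stable for this particular table; a more robust check would read `.status.phase` via `kubectl get backup ... -o jsonpath` instead of parsing the human-readable listing.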
--------------------------------------Mogdb (Topology = Replicas 2) Test Result--------------------------------------
[PASSED]|[Create]|[ComponentDefinition=mogdb-1.0.0-alpha.0;ComponentVersion=mogdb;ServiceVersion=5.0.5;]|[Description=Create a cluster with the specified component definition mogdb-1.0.0-alpha.0 and component version mogdb and service version 5.0.5]
[PASSED]|[Connect]|[ComponentName=mogdb]|[Description=Connect to the cluster]
[PASSED]|[Expose]|[Enable=true;TYPE=intranet;ComponentName=mogdb]|[Description=Expose Enable the intranet service with mogdb component]
[PASSED]|[VerticalScaling]|[ComponentName=mogdb]|[Description=VerticalScaling the cluster specify component mogdb]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=mogdb]|[Description=HorizontalScaling Out the cluster specify component mogdb]
[PASSED]|[HorizontalScaling In]|[ComponentName=mogdb]|[Description=HorizontalScaling In the cluster specify component mogdb]
[PASSED]|[Bench]|[ComponentName=mogdb]|[Description=Bench the cluster service with mogdb component]
[PASSED]|[Bench]|[HostType=LB;ComponentName=mogdb]|[Description=Bench the cluster LB service with mogdb component]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Backup]|[BackupMethod=volume-snapshot]|[Description=The cluster volume-snapshot Backup]
[PASSED]|[Restore]|[BackupMethod=volume-snapshot]|[Description=The cluster volume-snapshot Restore]
[PASSED]|[Connect]|[ComponentName=mogdb]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=volume-snapshot]|[Description=Delete the volume-snapshot restore cluster]
[PASSED]|[Expose]|[Disable=true;TYPE=intranet;ComponentName=mogdb]|[Description=Expose Disable the intranet service with mogdb component]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster] [END]