source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-bhsuh`
`kubectl create namespace ns-bhsuh`
namespace/ns-bhsuh created
create namespace ns-bhsuh done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.1`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 33.6M  100 33.6M    0     0   197M      0 --:--:-- --:--:-- --:--:--  197M
kbcli installed successfully.
Kubernetes: v1.32.6
KubeBlocks: 1.0.1
kbcli: 1.0.1
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:

	https://kubeblocks.io

download kbcli v1.0.1 done
Kubernetes: v1.32.6
KubeBlocks: 1.0.1
kbcli: 1.0.1
Kubernetes Env: v1.32.6
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is: 1.0.1
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.1
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name: postgresql
set component version
set component version: postgresql
set service versions: 17.5.0,16.9.0,16.4.0,15.13.0,15.7.0,14.18.0,14.8.0,14.7.2,12.22.0,12.15.0,12.14.1,12.14.0
set service versions sorted: 12.14.0,12.14.1,12.15.0,12.22.0,14.7.2,14.8.0,14.18.0,15.7.0,15.13.0,16.4.0,16.9.0,17.5.0
set postgresql component definition
set postgresql component definition postgresql-12-1.0.1
REPORT_COUNT 0:0
set replicas
first: 2,12.14.0|2,12.14.1|2,12.15.0|2,12.22.0|2,14.7.2|2,14.8.0|2,14.18.0|2,15.7.0|2,15.13.0|2,16.4.0|2,16.9.0|2,17.5.0
set replicas third: 2,16.4.0
set replicas fourth: 2,16.4.0
set minimum cmpv service version
set minimum cmpv service version replicas: 2,16.4.0
REPORT_COUNT:1
CLUSTER_TOPOLOGY:replication
topology replication found in cluster definition postgresql
set postgresql component definition
set postgresql component definition postgresql-16-1.0.1
LIMIT_CPU:0.1
LIMIT_MEMORY:0.5
storage size: 3
CLUSTER_NAME:postgres-jvzjnz
No resources found in ns-bhsuh namespace.
pod_info:
termination_policy:WipeOut
create 2 replica WipeOut postgresql cluster
check component definition
set component definition by component version
check cmpd by labels
set component definition1: postgresql-16-1.0.1 by component version: postgresql
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  clusterDef: postgresql
  topology: replication
  terminationPolicy: WipeOut
  componentSpecs:
    - name: postgresql
      serviceVersion: 16.4.0
      labels:
        apps.kubeblocks.postgres.patroni/scope: postgres-jvzjnz-postgresql
      replicas: 2
      disableExporter: true
      resources:
        limits:
          cpu: 100m
          memory: 0.5Gi
        requests:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 3Gi
`kubectl apply -f test_create_postgres-jvzjnz.yaml`
cluster.apps.kubeblocks.io/postgres-jvzjnz created
apply test_create_postgres-jvzjnz.yaml Success
`rm -rf test_create_postgres-jvzjnz.yaml`
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Creating   Sep 11,2025 17:21 UTC+0800   clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz`
set secret: postgres-jvzjnz-postgresql-account-postgres
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.username}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.password}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.port}"`
DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres
check pod postgres-jvzjnz-postgresql-1 container_name postgresql exist password NrMJ78303K
check pod postgres-jvzjnz-postgresql-1 container_name pgbouncer exist password NrMJ78303K
check pod postgres-jvzjnz-postgresql-1 container_name dbctl exist password NrMJ78303K
check pod postgres-jvzjnz-postgresql-1 container_name kbagent exist password NrMJ78303K
check pod postgres-jvzjnz-postgresql-1 container_name config-manager exist password NrMJ78303K
No container logs contain secret password.
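The jsonpath lookups above return the raw Secret fields, which Kubernetes stores base64-encoded. A minimal decode sketch (the kubectl line is an assumption based on the secret name in the log, and is commented out because it needs the live cluster; the last line demonstrates the decode step on the username field, whose encoded form is `cG9zdGdyZXM=`):

```shell
# Assumed secret name, taken from the log above; needs the live cluster,
# so it is shown commented out:
# DB_USERNAME=$(kubectl get secret postgres-jvzjnz-postgresql-account-postgres \
#   -n ns-bhsuh -o jsonpath='{.data.username}' | base64 -d)

# Secret .data fields are base64-encoded; the decode step itself:
printf 'cG9zdGdyZXM=' | base64 -d   # prints "postgres"
```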
describe cluster
`kbcli cluster describe postgres-jvzjnz --namespace ns-bhsuh`
Name: postgres-jvzjnz	 Created Time: Sep 11,2025 17:21 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY      STATUS    TERMINATION-POLICY
ns-bhsuh    postgresql           replication   Running   WipeOut

Endpoints:
COMPONENT    INTERNAL                                                                EXTERNAL
postgresql   postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local:5432
             postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local:6432

Topology:
COMPONENT    SERVICE-VERSION   INSTANCE                       ROLE        STATUS    AZ   NODE                                             CREATED-TIME
postgresql   16.4.0            postgres-jvzjnz-postgresql-0   secondary   Running   0    aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgresql   16.4.0            postgres-jvzjnz-postgresql-1   primary     Running   0    aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800

Resources Allocation:
COMPONENT    INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
postgresql                       100m / 100m          512Mi / 512Mi           data:3Gi       default

Images:
COMPONENT    COMPONENT-DEFINITION   IMAGE
postgresql   postgresql-16-1.0.1    docker.io/apecloud/spilo:16.4.0
                                    docker.io/apecloud/pgbouncer:1.19.0
                                    docker.io/apecloud/dbctl:0.2.0
                                    docker.io/apecloud/kubeblocks-tools:1.0.1

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-bhsuh postgres-jvzjnz

`kbcli cluster label postgres-jvzjnz app.kubernetes.io/instance- --namespace ns-bhsuh`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label postgres-jvzjnz app.kubernetes.io/instance=postgres-jvzjnz --namespace ns-bhsuh`
`kbcli cluster label postgres-jvzjnz --list --namespace ns-bhsuh`
NAME              NAMESPACE   LABELS
postgres-jvzjnz   ns-bhsuh    app.kubernetes.io/instance=postgres-jvzjnz clusterdefinition.kubeblocks.io/name=postgresql
label cluster app.kubernetes.io/instance=postgres-jvzjnz Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=postgres-jvzjnz --namespace ns-bhsuh`
`kbcli cluster label postgres-jvzjnz --list --namespace ns-bhsuh`
NAME              NAMESPACE   LABELS
postgres-jvzjnz   ns-bhsuh    app.kubernetes.io/instance=postgres-jvzjnz case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=postgresql
label cluster case.name=kbcli.test1 Success
`kbcli cluster label postgres-jvzjnz case.name=kbcli.test2 --overwrite --namespace ns-bhsuh`
`kbcli cluster label postgres-jvzjnz --list --namespace ns-bhsuh`
NAME              NAMESPACE   LABELS
postgres-jvzjnz   ns-bhsuh    app.kubernetes.io/instance=postgres-jvzjnz case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=postgresql
label cluster case.name=kbcli.test2 Success
`kbcli cluster label postgres-jvzjnz case.name- --namespace ns-bhsuh`
`kbcli cluster label postgres-jvzjnz --list --namespace ns-bhsuh`
NAME              NAMESPACE   LABELS
postgres-jvzjnz   ns-bhsuh    app.kubernetes.io/instance=postgres-jvzjnz clusterdefinition.kubeblocks.io/name=postgresql
delete cluster label case.name Success
cluster connect
`echo 'create extension vector;' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
CREATE EXTENSION
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
                                          List of installed extensions
        Name        | Version |   Schema   |                              Description
--------------------+---------+------------+------------------------------------------------------------------------
 file_fdw           | 1.0     | public     | foreign-data wrapper for flat file access
 pg_auth_mon        | 1.1     | public     | monitor connection attempts per user
 pg_cron            | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_stat_kcache     | 2.2.3   | public     | Kernel statistics gathering
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpython3u         | 1.0     | pg_catalog | PL/Python3U untrusted procedural language
 set_user           | 4.0.1   | public     | similar to SET ROLE but with added logging
 vector             | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)
`echo 'show max_connections;' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
 max_connections
-----------------
 56
(1 row)
connect cluster Success
set max_connections to 56
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-postgres-jvzjnz" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-postgres-jvzjnz --namespace ns-bhsuh`
Error from server (NotFound): pods "test-db-client-executionloop-postgres-jvzjnz" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-postgres-jvzjnz" not found
`kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz`
set secret: postgres-jvzjnz-postgresql-account-postgres
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.username}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.password}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.port}"`
DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local"
        - "--user"
        - "postgres"
        - "--password"
        - "NrMJ78303K"
        - "--port"
        - "5432"
        - "--dbtype"
        - "postgresql"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-postgres-jvzjnz.yaml`
pod/test-db-client-executionloop-postgres-jvzjnz created
apply test-db-client-executionloop-postgres-jvzjnz.yaml Success
`rm -rf test-db-client-executionloop-postgres-jvzjnz.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 6s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 10s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 15s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 20s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 25s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 30s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 35s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 40s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 45s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 51s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 56s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 1/1 Running 0 61s
check pod test-db-client-executionloop-postgres-jvzjnz status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-jvzjnz 0/1 Completed 0 66s
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
--host postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local --user postgres --password NrMJ78303K --port 5432 --dbtype postgresql --test executionloop --duration 60 --interval 1
SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.postgresql.jdbc.TimestampUtils (file:/app/oneclient-1.0-all.jar) to field java.util.TimeZone.defaultTimeZone
WARNING: Please consider reporting this to the maintainers of org.postgresql.jdbc.TimestampUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Execution loop start: create databases executions_loop
CREATE DATABASE executions_loop;
reconnect connection executions_loop
drop table executions_loop_table
DROP TABLE IF EXISTS executions_loop_table;
create table executions_loop_table
CREATE TABLE IF NOT EXISTS executions_loop_table (id SERIAL PRIMARY KEY, value TEXT, tinyint_col SMALLINT, smallint_col SMALLINT, integer_col INTEGER, bigint_col BIGINT, real_col REAL, double_col DOUBLE PRECISION, numeric_col NUMERIC(10, 2), date_col DATE, time_col TIME, timestamp_col TIMESTAMP, timestamptz_col TIMESTAMP WITH TIME ZONE, interval_col INTERVAL, boolean_col BOOLEAN, char_col CHAR(10), varchar_col VARCHAR(255), text_col TEXT, bytea_col BYTEA, uuid_col UUID, json_col JSON, jsonb_col JSONB, xml_col XML, enum_col VARCHAR(10) CHECK (enum_col IN ('Option1', 'Option2', 'Option3')), set_col VARCHAR(255) CHECK (set_col IN ('Value1', 'Value2', 'Value3')), int_array_col INTEGER[], text_array_col TEXT[], point_col POINT, line_col LINE, lseg_col LSEG, box_col BOX, path_col PATH, polygon_col POLYGON, circle_col CIRCLE, cidr_col CIDR, inet_col INET, macaddr_col MACADDR, macaddr8_col
MACADDR8, bit_col BIT(8), bit_var_col BIT VARYING(8), varbit_col BIT VARYING(8), money_col MONEY, oid_col OID, regproc_col REGPROC, regprocedure_col REGPROCEDURE, regoper_col REGOPER, regoperator_col REGOPERATOR, regclass_col REGCLASS, regtype_col REGTYPE, regrole_col REGROLE, regnamespace_col REGNAMESPACE, regconfig_col REGCONFIG, regdictionary_col REGDICTIONARY );
Execution loop start: INSERT INTO executions_loop_table (value, tinyint_col, smallint_col, integer_col, bigint_col, real_col, double_col, numeric_col, date_col, time_col, timestamp_col, timestamptz_col, interval_col, boolean_col, char_col, varchar_col, text_col, bytea_col, uuid_col, json_col, jsonb_col, xml_col, enum_col, set_col, int_array_col, text_array_col, point_col, line_col, lseg_col, box_col, path_col, polygon_col, circle_col, cidr_col, inet_col, macaddr_col, macaddr8_col, bit_col, bit_var_col, varbit_col, money_col, oid_col, regproc_col, regprocedure_col, regoper_col, regoperator_col, regclass_col, regtype_col, regrole_col, regnamespace_col, regconfig_col, regdictionary_col) VALUES ('executions_loop_test_1', 98, 13228, 1892530608, -1776145803214215723, 0.6340306, 0.8045514066590833, 51.736465928570865, '2025-09-11', '09:24:16', '2025-09-11 09:24:16.548', CURRENT_TIMESTAMP, '20 hours 34 minutes 52 seconds', FALSE, 'rZ6z4fcMcV', 'olNeljBwnbhuUcAZrIINkWzSXCAX0OiWLVPED0coWLzbK2CfM1Nr3OexCSSiu8WtgXc78EJMpUyzhsLubtHHM384XslHjVPe6k7aPnToSobhgoLMUQ1gqgQwQesRPyxwpwlxDEENwaym77Vq5KddLIWnij02vEsHZXqZjnk5B42sFLh77vGqkXnNPTtE5cw3Nh2HWfXARGksZlQVm5Z90Fn1X22msdbSF84J47FzsxzXH8sciEPPbBn54okRrjC', '9G9bl5YlttyHwh4MolzsphfeVlmV2hYmjoeC95spzfnkyOyMCg4LV4oyjQfEO9lxpX2POBD7ljtRJwv2XVKAE9tqPzVlodrJ3EnFTMz2B8nFv197WPoBwsMSaei1IKo2ybYWOlOOnSc5cFX1TeVLtt7jrx0tc8tcRquGngEUkgMsbw3OtRTVRKmpx4pLLnNdlUhGSTJRUdm7RWOOz1lam96RDSiJXAdJZWq4g747kXzD0hQPSsBx54SlObRYTdV', decode('8ee608e50609e0dea836', 'hex'), 'd88a1ed2-2dc0-444f-9cd4-018a22ec77de', '{"key1": "CzTF4FmcBe", "key2": 51}', '{"key1": "zrjTQe3xGl", "key2": 98}',
'X0KYGsMG9o69', 'Option2', 'Value3', ARRAY[0, 13, 50], ARRAY['j9DZp2nGMR', 'MktHXjDRv8', 'lgSrgkyVi8'], '(76.32317719836382, 15.958330069996673)', '{40.63464863538178, 45.441014017160995, 78.49072378538278}', '[(45.14526459193725, 66.59573988976337), (3.8711671252581903, 60.43402262022152)]', '((71.28927337595648, 34.49215459158258), (23.521780210578868, 46.52310710826969))', '((22.16302318919562, 14.484352572353565), (21.367426301360293, 40.06296472101039), (51.06626586427871, 12.455330939537424))', '((50.6901081984552, 56.56735003627341), (5.271926753040434, 22.624426752239735), (10.600336071439331, 41.224199558390794), (65.10868688286887, 26.335441063391272))', '<(44.066424, 45.330293), 8.048795>', '192.168.26.0/24', '192.168.103.15', '08:00:2b:01:02:03', '08:00:2b:01:02:03:04:05', B'10101010', B'10101010', B'10101010', '$358.6191619773964', 1193205448, 'acos', abs(1), '#-', +1, 'pg_class', 'integer', 'postgres', 'pg_catalog', 'simple', 'english_stem' );
[ 1s ] executions total: 1 successful: 1 failed: 0 disconnect: 0
[ 2s ] executions total: 78 successful: 78 failed: 0 disconnect: 0
[ 3s ] executions total: 153 successful: 153 failed: 0 disconnect: 0
[ 4s ] executions total: 255 successful: 255 failed: 0 disconnect: 0
[ 5s ] executions total: 363 successful: 363 failed: 0 disconnect: 0
[ 6s ] executions total: 474 successful: 474 failed: 0 disconnect: 0
[ 7s ] executions total: 527 successful: 527 failed: 0 disconnect: 0
[ 8s ] executions total: 634 successful: 634 failed: 0 disconnect: 0
[ 9s ] executions total: 747 successful: 747 failed: 0 disconnect: 0
[ 10s ] executions total: 864 successful: 864 failed: 0 disconnect: 0
[ 11s ] executions total: 980 successful: 980 failed: 0 disconnect: 0
[ 12s ] executions total: 1047 successful: 1047 failed: 0 disconnect: 0
[ 13s ] executions total: 1149 successful: 1149 failed: 0 disconnect: 0
[ 14s ] executions total: 1260 successful: 1260 failed: 0 disconnect: 0
[ 15s ] executions total: 1370 successful: 1370 failed: 0 disconnect: 0
[ 16s ] executions total: 1476 successful: 1476 failed: 0 disconnect: 0
[ 17s ] executions total: 1584 successful: 1584 failed: 0 disconnect: 0
[ 18s ] executions total: 1688 successful: 1688 failed: 0 disconnect: 0
[ 19s ] executions total: 1813 successful: 1813 failed: 0 disconnect: 0
[ 20s ] executions total: 1931 successful: 1931 failed: 0 disconnect: 0
[ 21s ] executions total: 2040 successful: 2040 failed: 0 disconnect: 0
[ 22s ] executions total: 2093 successful: 2093 failed: 0 disconnect: 0
[ 23s ] executions total: 2106 successful: 2106 failed: 0 disconnect: 0
[ 24s ] executions total: 2138 successful: 2138 failed: 0 disconnect: 0
[ 25s ] executions total: 2241 successful: 2241 failed: 0 disconnect: 0
[ 26s ] executions total: 2343 successful: 2343 failed: 0 disconnect: 0
[ 27s ] executions total: 2448 successful: 2448 failed: 0 disconnect: 0
[ 28s ] executions total: 2550 successful: 2550 failed: 0 disconnect: 0
[ 29s ] executions total: 2665 successful: 2665 failed: 0 disconnect: 0
[ 30s ] executions total: 2784 successful: 2784 failed: 0 disconnect: 0
[ 31s ] executions total: 2903 successful: 2903 failed: 0 disconnect: 0
[ 32s ] executions total: 2973 successful: 2973 failed: 0 disconnect: 0
[ 33s ] executions total: 3067 successful: 3067 failed: 0 disconnect: 0
[ 34s ] executions total: 3172 successful: 3172 failed: 0 disconnect: 0
[ 35s ] executions total: 3273 successful: 3273 failed: 0 disconnect: 0
[ 36s ] executions total: 3379 successful: 3379 failed: 0 disconnect: 0
[ 37s ] executions total: 3411 successful: 3411 failed: 0 disconnect: 0
[ 38s ] executions total: 3430 successful: 3430 failed: 0 disconnect: 0
[ 39s ] executions total: 3500 successful: 3500 failed: 0 disconnect: 0
[ 40s ] executions total: 3606 successful: 3606 failed: 0 disconnect: 0
[ 41s ] executions total: 3709 successful: 3709 failed: 0 disconnect: 0
[ 42s ] executions total: 3769 successful: 3769 failed: 0 disconnect: 0
[ 43s ] executions total: 3847 successful: 3847 failed: 0 disconnect: 0
[ 44s ] executions total: 3931 successful: 3931 failed: 0 disconnect: 0
[ 45s ] executions total: 4011 successful: 4011 failed: 0 disconnect: 0
[ 46s ] executions total: 4096 successful: 4096 failed: 0 disconnect: 0
[ 47s ] executions total: 4179 successful: 4179 failed: 0 disconnect: 0
[ 48s ] executions total: 4265 successful: 4265 failed: 0 disconnect: 0
[ 49s ] executions total: 4345 successful: 4345 failed: 0 disconnect: 0
[ 50s ] executions total: 4439 successful: 4439 failed: 0 disconnect: 0
[ 51s ] executions total: 4533 successful: 4533 failed: 0 disconnect: 0
[ 52s ] executions total: 4553 successful: 4553 failed: 0 disconnect: 0
[ 53s ] executions total: 4570 successful: 4570 failed: 0 disconnect: 0
[ 54s ] executions total: 4584 successful: 4584 failed: 0 disconnect: 0
[ 55s ] executions total: 4606 successful: 4606 failed: 0 disconnect: 0
[ 56s ] executions total: 4642 successful: 4642 failed: 0 disconnect: 0
[ 57s ] executions total: 4738 successful: 4738 failed: 0 disconnect: 0
[ 60s ] executions total: 4819 successful: 4819 failed: 0 disconnect: 0
Test Result:
Total Executions: 4819
Successful Executions: 4819
Failed Executions: 0
Disconnection Counts: 0
Connection Information:
Database Type: postgresql
Host: postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local
Port: 5432
Database:
Table:
User: postgres
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 4819
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-postgres-jvzjnz --namespace ns-bhsuh`
pod/test-db-client-executionloop-postgres-jvzjnz patched (no change)
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-postgres-jvzjnz" force deleted
LB_TYPE is set to: intranet
cluster expose
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster expose postgres-jvzjnz --auto-approve --force=true --type intranet --enable true --components postgresql --role-selector primary --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-expose-chcct created successfully, you can view the progress:
	kbcli cluster describe-ops postgres-jvzjnz-expose-chcct -n ns-bhsuh
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                           NAMESPACE   TYPE     CLUSTER           COMPONENT   STATUS     PROGRESS   CREATED-TIME
postgres-jvzjnz-expose-chcct   ns-bhsuh    Expose   postgres-jvzjnz               Creating   -/-        Sep 11,2025 17:25 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace
ns-bhsuh -- psql -U postgres`
check cluster connect done
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                           NAMESPACE   TYPE     CLUSTER           COMPONENT    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-expose-chcct   ns-bhsuh    Expose   postgres-jvzjnz   postgresql   Running   0/1        Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 17:25 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-expose-chcct ns-bhsuh Expose postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 17:25 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-expose-chcct --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-expose-chcct patched
`kbcli cluster delete-ops --name postgres-jvzjnz-expose-chcct --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-expose-chcct deleted
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success
test failover oom
check cluster status before cluster-failover-oom
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace
ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-oom-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  stressors:
    memory:
      workers: 1
      size: "100GB"
      oomScoreAdj: -1000
  duration: 2m

`kubectl apply -f test-chaos-mesh-oom-postgres-jvzjnz.yaml`
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz created
apply test-chaos-mesh-oom-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-oom-postgres-jvzjnz.yaml`

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x3)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" force deleted
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz patched
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
checking failover...
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
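Cleanup in this suite repeatedly clears the StressChaos finalizers with a JSON merge-patch so a force delete cannot leave the object stuck. As a minimal sketch (the surrounding `kubectl patch … --type=merge` command is from the log; the Python below only illustrates the shape of the patch body it expects):

```python
import json

# Merge-patch body that empties metadata.finalizers on a stuck chaos object,
# letting the API server complete its deletion after a force delete.
patch = {"metadata": {"finalizers": []}}
payload = json.dumps(patch, separators=(",", ":"))
print(payload)  # {"metadata":{"finalizers":[]}}
```

Passing this payload via `-p` with `--type=merge` replaces the finalizers list wholesale, which is exactly why merge patch (rather than strategic merge) is used here.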
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-oom-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  stressors:
    memory:
      workers: 1
      size: "100GB"
      oomScoreAdj: -1000
  duration: 2m

`kubectl apply -f test-chaos-mesh-oom-postgres-jvzjnz.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz created
apply test-chaos-mesh-oom-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-oom-postgres-jvzjnz.yaml`

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x7)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
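The StressChaos spec above forces an OOM by over-asking: one memory worker sized at "100GB" inside a container whose memory limit is 512Mi, with `oomScoreAdj: -1000` presumably shielding the stressor itself so the database process becomes the kill candidate. A quick check of how lopsided that request is (pure arithmetic, not from the test suite; the decimal-GB vs binary-Mi byte interpretations are my assumption):

```python
# StressChaos memory stressor size vs. the pod's memory limit.
stressor_bytes = 100 * 1000**3   # size: "100GB" from the chaos spec (decimal GB)
limit_bytes = 512 * 1024**2      # memory limit 512Mi from the cluster spec (binary Mi)
ratio = stressor_bytes / limit_bytes
print(round(ratio))              # the stressor over-asks by roughly 186x
```

Any ratio well above 1 guarantees the cgroup limit is hit; the huge margin here just makes the OOM kill immediate and deterministic.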
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" force deleted
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz patched
failover pod name:postgres-jvzjnz-postgresql-1
checking failover...
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-oom-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  stressors:
    memory:
      workers: 1
      size: "100GB"
      oomScoreAdj: -1000
  duration: 2m

`kubectl apply -f test-chaos-mesh-oom-postgres-jvzjnz.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz created
apply test-chaos-mesh-oom-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-oom-postgres-jvzjnz.yaml`

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x3)
cluster_status:Failed (x3)
cluster_status:Updating (x6)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0; secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-jvzjnz" force deleted
stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-jvzjnz patched
check failover pod name:postgres-jvzjnz-postgresql-0
failover oom Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test failover fullcpu
check cluster status before cluster-failover-fullcpu
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-jvzjnz" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-fullcpu-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  mode: all
  stressors:
    cpu:
      workers: 100
      load: 100
  duration: 2m

`kubectl apply -f test-chaos-mesh-fullcpu-postgres-jvzjnz.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpu-postgres-jvzjnz created
apply test-chaos-mesh-fullcpu-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-fullcpu-postgres-jvzjnz.yaml`
fullcpu chaos test waiting 120 seconds

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:21 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0; secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-jvzjnz" force deleted
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-jvzjnz" not found
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-0
failover fullcpu Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster stop postgres-jvzjnz --auto-approve --force=true --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-stop-nkxkt created successfully, you can view the progress:
	kbcli cluster describe-ops postgres-jvzjnz-stop-nkxkt -n ns-bhsuh
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                         NAMESPACE   TYPE   CLUSTER           COMPONENT   STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-stop-nkxkt   ns-bhsuh    Stop   postgres-jvzjnz               Running   -/-        Sep 11,2025 17:32 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Stopping   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Stopping (x11)
check cluster status done
cluster_status:Stopped

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
check pod status done

check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                         NAMESPACE   TYPE   CLUSTER           COMPONENT    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-stop-nkxkt   ns-bhsuh    Stop   postgres-jvzjnz   postgresql   Succeed   2/2        Sep 11,2025 17:32 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-stop-nkxkt ns-bhsuh Stop postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 17:32 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-stop-nkxkt --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-stop-nkxkt patched
`kbcli cluster delete-ops --name postgres-jvzjnz-stop-nkxkt --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-stop-nkxkt deleted

cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start postgres-jvzjnz --force=true --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-start-trnpr created successfully, you can view the progress:
	kbcli cluster describe-ops postgres-jvzjnz-start-trnpr -n ns-bhsuh
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                          NAMESPACE   TYPE    CLUSTER           COMPONENT   STATUS     PROGRESS   CREATED-TIME
postgres-jvzjnz-start-trnpr   ns-bhsuh    Start   postgres-jvzjnz               Creating   -/-        Sep 11,2025 17:33 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x15)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:3Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                          NAMESPACE   TYPE    CLUSTER           COMPONENT    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-start-trnpr   ns-bhsuh    Start   postgres-jvzjnz   postgresql   Succeed   2/2        Sep 11,2025 17:33 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-start-trnpr ns-bhsuh Start postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 17:33 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-start-trnpr --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-start-trnpr patched
`kbcli cluster delete-ops --name postgres-jvzjnz-start-trnpr --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-start-trnpr deleted

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

`kubectl get pvc -l app.kubernetes.io/instance=postgres-jvzjnz,apps.kubeblocks.io/component-name=postgresql,apps.kubeblocks.io/vct-name=data --namespace ns-bhsuh`
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in postgres-jvzjnz namespace.
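The volume-expand step that follows grows the data PVC from the original 3Gi to 7Gi. Kubernetes volume expansion is one-way: a PVC may only grow, never shrink, which is why the suite can fire the ops request with `--force=true` and simply poll for completion. A trivial check of the requested growth (arithmetic only, not from the test suite):

```python
GIB = 1024**3
old_size, new_size = 3 * GIB, 7 * GIB   # data PVC: 3Gi -> 7Gi
assert new_size > old_size               # expansion only; PVCs cannot shrink
print((new_size - old_size) // GIB)      # grows by 4 GiB
```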
`kbcli cluster volume-expand postgres-jvzjnz --auto-approve --force=true --components postgresql --volume-claim-templates data --storage 7Gi --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-volumeexpansion-vglqx created successfully, you can view the progress:
	kbcli cluster describe-ops postgres-jvzjnz-volumeexpansion-vglqx -n ns-bhsuh
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                                    NAMESPACE   TYPE              CLUSTER           COMPONENT    STATUS     PROGRESS   CREATED-TIME
postgres-jvzjnz-volumeexpansion-vglqx   ns-bhsuh    VolumeExpansion   postgres-jvzjnz   postgresql   Creating   -/-        Sep 11,2025 17:40 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x17)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
No resources found in postgres-jvzjnz namespace.

check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                                    NAMESPACE   TYPE              CLUSTER           COMPONENT    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-volumeexpansion-vglqx   ns-bhsuh    VolumeExpansion   postgres-jvzjnz   postgresql   Succeed   2/2        Sep 11,2025 17:40 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-volumeexpansion-vglqx ns-bhsuh VolumeExpansion postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 17:40 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-volumeexpansion-vglqx --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-volumeexpansion-vglqx patched
`kbcli cluster delete-ops --name postgres-jvzjnz-volumeexpansion-vglqx --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-volumeexpansion-vglqx deleted

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test failover networkbandwidthover
check cluster status before cluster-failover-networkbandwidthover
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-jvzjnz" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkbandwidthover-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  action: bandwidth
  mode: all
  bandwidth:
    rate: '1bps'
    limit: 20971520
    buffer: 10000
  duration: 2m

`kubectl apply -f test-chaos-mesh-networkbandwidthover-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-postgres-jvzjnz created
apply test-chaos-mesh-networkbandwidthover-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networkbandwidthover-postgres-jvzjnz.yaml`
networkbandwidthover chaos test waiting 120 seconds

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Updating   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating (x5)
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0; secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-jvzjnz" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-jvzjnz" not found
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-0
failover networkbandwidthover Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test switchover cluster promote
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster promote postgres-jvzjnz --auto-approve --force=true --instance postgres-jvzjnz-postgresql-0 --candidate postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-switchover-km9vg created successfully, you can view the progress:
	kbcli cluster describe-ops postgres-jvzjnz-switchover-km9vg -n ns-bhsuh
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                               NAMESPACE   TYPE         CLUSTER           COMPONENT                    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-switchover-km9vg   ns-bhsuh    Switchover   postgres-jvzjnz   postgres-jvzjnz-postgresql   Running   -/-        Sep 11,2025 17:49 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    100m / 100m          512Mi / 512Mi           data:7Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1; secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                               NAMESPACE   TYPE         CLUSTER           COMPONENT                    STATUS    PROGRESS   CREATED-TIME
postgres-jvzjnz-switchover-km9vg   ns-bhsuh    Switchover   postgres-jvzjnz   postgres-jvzjnz-postgresql   Succeed   1/1        Sep 11,2025 17:49 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-switchover-km9vg ns-bhsuh Switchover postgres-jvzjnz postgres-jvzjnz-postgresql Succeed 1/1 Sep 11,2025 17:49 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-switchover-km9vg --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-switchover-km9vg patched
`kbcli cluster delete-ops --name postgres-jvzjnz-switchover-km9vg --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-switchover-km9vg deleted

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success
switchover pod:postgres-jvzjnz-postgresql-1 switchover success

`kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz`
set secret: postgres-jvzjnz-postgresql-account-postgres
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath='{.data.username}'`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath='{.data.password}'`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath='{.data.port}'`
DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres
`create database benchtest;`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
NOTICE: database "benchtest" does not exist, skipping
return msg:DROP DATABASE CREATE DATABASE

apiVersion: v1
kind: Pod
metadata:
  name: benchtest-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  containers:
    - name: test-sysbench
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/customsuites:latest
      env:
        - name: TYPE
          value: "2"
        - name: FLAG
          value: "0"
        - name: CONFIGS
          value: "mode:all,driver:pgsql,host:postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local,user:postgres,password:NrMJ78303K,port:5432,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write"
  restartPolicy: Never

`kubectl apply -f benchtest-postgres-jvzjnz.yaml`
pod/benchtest-postgres-jvzjnz created
apply benchtest-postgres-jvzjnz.yaml Success
check pod status
pod_status: benchtest-postgres-jvzjnz   0/1   ContainerCreating   0   0s
pod_status: benchtest-postgres-jvzjnz   1/1   Running             0   4s
pod_status: benchtest-postgres-jvzjnz   1/1   Running             0   9s
pod_status: benchtest-postgres-jvzjnz   1/1   Running             0   14s
check pod benchtest-postgres-jvzjnz status done
pod_status: benchtest-postgres-jvzjnz   0/1   Completed           0   20s
`rm -rf benchtest-postgres-jvzjnz.yaml`

`kubectl logs benchtest-postgres-jvzjnz --tail 30 --namespace ns-bhsuh`
[ 7s ] thds: 4 tps: 21.00 qps: 399.99 (r/w/o: 270.99/87.00/42.00) lat (ms,99%): 493.24 err/s: 0.00 reconn/s: 0.00
[ 8s ] thds: 4 tps: 21.00 qps: 447.02 (r/w/o: 318.01/87.00/42.00) lat (ms,99%): 601.29 err/s: 0.00 reconn/s: 0.00
[ 9s ] thds: 4 tps: 27.00 qps: 528.06 (r/w/o: 373.05/98.01/57.01) lat (ms,99%): 707.07 err/s: 1.00 reconn/s: 0.00
[ 10s ] thds: 4 tps: 22.99 qps: 461.84 (r/w/o: 315.89/99.96/45.98) lat (ms,99%): 502.20 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:    2926
        write:   828
        other:   422
        total:   4176
    transactions: 207 (20.29 per sec.)
    queries: 4176 (409.26 per sec.)
    ignored errors: 2 (0.20 per sec.)
    reconnects: 0 (0.00 per sec.)
General statistics:
    total time: 10.2021s
    total number of events: 207
Latency (ms):
    min: 4.35
    avg: 194.72
    max: 796.60
    99th percentile: 694.45
    sum: 40306.22
Threads fairness:
    events (avg/stddev): 51.7500/7.50
    execution time (avg/stddev): 10.0766/0.08
`kubectl delete pod benchtest-postgres-jvzjnz --force --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "benchtest-postgres-jvzjnz" force deleted
LB_TYPE is set to: intranet
No resources found in ns-bhsuh namespace.
`kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz`
set secret: postgres-jvzjnz-postgresql-account-postgres
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.username}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.password}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.port}"`
DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres
`create database benchtest;`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
return msg:DROP DATABASE CREATE DATABASE
apiVersion: v1
kind: Pod
metadata:
  name: benchtest-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  containers:
    - name: test-sysbench
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/customsuites:latest
      env:
        - name: TYPE
          value: "2"
        - name: FLAG
          value: "0"
        - name: CONFIGS
          value: "mode:all,driver:pgsql,host:10.224.0.8,user:postgres,password:NrMJ78303K,port:5432,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write"
  restartPolicy: Never
`kubectl apply -f benchtest-postgres-jvzjnz.yaml`
pod/benchtest-postgres-jvzjnz created
apply benchtest-postgres-jvzjnz.yaml Success
check pod status
pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-jvzjnz 0/1 ContainerCreating 0 0s
pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-jvzjnz 1/1 Running 0 5s
pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-jvzjnz 1/1 Running 0 10s
pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-jvzjnz 1/1 Running 0 15s
check pod benchtest-postgres-jvzjnz status done
pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-jvzjnz 0/1 Completed 0 20s
`rm -rf benchtest-postgres-jvzjnz.yaml`
`kubectl logs benchtest-postgres-jvzjnz --tail 30 --namespace ns-bhsuh `
[ 7s ] thds: 4 tps: 15.00 qps: 289.93 (r/w/o: 197.95/59.98/31.99) lat (ms,99%): 502.20 err/s: 1.00 reconn/s: 0.00
[ 8s ] thds: 4 tps: 9.00 qps: 213.00 (r/w/o: 159.00/34.00/20.00) lat (ms,99%): 1109.09 err/s: 1.00 reconn/s: 0.00
[ 9s ] thds: 4 tps: 26.00 qps: 497.09 (r/w/o: 337.06/108.02/52.01) lat (ms,99%): 995.51 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 4 tps: 26.00 qps: 574.00 (r/w/o: 410.00/108.00/56.00) lat (ms,99%): 397.39 err/s: 2.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read: 2786
        write: 783
        other: 401
        total: 3970
    transactions: 194 (18.83 per sec.)
    queries: 3970 (385.32 per sec.)
    ignored errors: 5 (0.49 per sec.)
    reconnects: 0 (0.00 per sec.)
General statistics:
    total time: 10.3016s
    total number of events: 194
Latency (ms):
    min: 4.22
    avg: 209.30
    max: 1102.24
    99th percentile: 893.56
    sum: 40603.90
Threads fairness:
    events (avg/stddev): 48.5000/3.50
    execution time (avg/stddev): 10.1510/0.15
`kubectl delete pod benchtest-postgres-jvzjnz --force --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
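The summary figures from the intranet run above can be cross-checked: sysbench's per-second transaction rate is just total transactions divided by total run time, here 194 transactions over 10.3016 s. A minimal sanity check:

```shell
# tps = transactions / total time; should reproduce the "(18.83 per sec.)"
# figure printed in the sysbench summary above.
transactions=194
total_time=10.3016
awk -v t="$transactions" -v s="$total_time" 'BEGIN { printf "%.2f\n", t / s }'   # prints 18.83
```

The same arithmetic applied to the in-cluster service run (207 transactions over 10.2021 s) reproduces its 20.29 tps figure.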
pod "benchtest-postgres-jvzjnz" force deleted
test failover networkdelay
check cluster status before cluster-failover-networkdelay
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-jvzjnz" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkdelay-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  action: delay
  delay:
    latency: 2000ms
    correlation: '100'
    jitter: 0ms
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkdelay-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-postgres-jvzjnz created
apply test-chaos-mesh-networkdelay-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networkdelay-postgres-jvzjnz.yaml`
networkdelay chaos test waiting 120 seconds
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-jvzjnz" force deleted
networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-postgres-jvzjnz patched
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover networkdelay Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
test failover networklossover
check cluster status before cluster-failover-networklossover
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-jvzjnz" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networklossover-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  action: loss
  loss:
    loss: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networklossover-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-postgres-jvzjnz created
apply test-chaos-mesh-networklossover-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networklossover-postgres-jvzjnz.yaml`
networklossover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 17:33 UTC+0800
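Each chaos step above pre-cleans and post-cleans its experiment object with the same sequence: clear finalizers via a merge patch, then force-delete, tolerating NotFound. That pattern can be sketched as a small reusable helper (the function name is hypothetical, not part of the test suite):

```shell
# Hypothetical helper mirroring the patch-then-force-delete cleanup used in this run.
# Clearing finalizers first keeps the delete from hanging on a stuck finalizer;
# "|| true" makes the step idempotent when the object does not exist.
cleanup_resource() {
  kind="$1"; name="$2"; ns="$3"
  kubectl patch "$kind" "$name" --namespace "$ns" --type=merge \
    -p '{"metadata":{"finalizers":[]}}' 2>/dev/null || true
  kubectl delete "$kind" "$name" --namespace "$ns" \
    --force --grace-period=0 2>/dev/null || true
}

# e.g. cleanup_resource networkchaos test-chaos-mesh-networklossover-postgres-jvzjnz ns-bhsuh
```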
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-jvzjnz" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-jvzjnz" not found
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-0
failover networklossover Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
test failover kill1
check cluster status before cluster-failover-kill1
check cluster status done
cluster_status:Running
`kill 1`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
exec return message:
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:33 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 17:33 UTC+0800
check pod status done
check cluster role
No resources found in ns-bhsuh namespace.
primary: postgres-jvzjnz-postgresql-0 postgres-jvzjnz-postgresql-1;secondary:
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover kill1 Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
cluster configure
component_tmp: postgresql
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: postgres-jvzjnz-reconfiguring-
  namespace: ns-bhsuh
spec:
  type: Reconfiguring
  clusterName: postgres-jvzjnz
  force: true
  reconfigures:
    - componentName: postgresql
      parameters:
        - key: shared_buffers
          value: '512MB'
check cluster status before ops
cluster_status:Updating
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_postgres-jvzjnz.yaml`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-reconfiguring-bxdql created
create test_ops_cluster_postgres-jvzjnz.yaml Success
`rm -rf test_ops_cluster_postgres-jvzjnz.yaml`
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
postgres-jvzjnz-reconfiguring-bxdql ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Running -/- Sep 11,2025 17:58 UTC+0800
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
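The `kubectl get secrets ... -o jsonpath` lookups used throughout this run return the secret's fields base64-encoded; they must be decoded before being assembled into connection strings like `DB_USERNAME:postgres;...`. A minimal sketch (the encoded literals below are illustrative stand-ins, not values read from this cluster):

```shell
# Decode secret fields as retrieved via e.g.:
#   kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath='{.data.username}'
# Sample base64 values for illustration only.
username=$(printf '%s' 'cG9zdGdyZXM=' | base64 -d)   # decodes to "postgres"
port=$(printf '%s' 'NTQzMg==' | base64 -d)           # decodes to "5432"
echo "DB_USERNAME:${username};DB_PORT:${port}"       # prints DB_USERNAME:postgres;DB_PORT:5432
```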
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
postgres-jvzjnz-reconfiguring-bxdql ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 17:58 UTC+0800
check ops status done
ops_status:postgres-jvzjnz-reconfiguring-bxdql ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 17:58 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-reconfiguring-bxdql --namespace ns-bhsuh `
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-reconfiguring-bxdql patched
`kbcli cluster delete-ops --name postgres-jvzjnz-reconfiguring-bxdql --force --auto-approve --namespace ns-bhsuh `
OpsRequest postgres-jvzjnz-reconfiguring-bxdql deleted
component_config:postgresql
check config variables
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
current value_actual: 512MB
configure:[shared_buffers] result actual:[512MB] equal expected:[512MB]
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
test failover networkpartition
check cluster status before cluster-failover-networkpartition
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-jvzjnz" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkpartition-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  action: partition
  mode: all
  target:
    mode: all
    selector:
      namespaces:
        - ns-bhsuh
      labelSelectors:
        apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkpartition-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-postgres-jvzjnz created
apply test-chaos-mesh-networkpartition-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networkpartition-postgres-jvzjnz.yaml`
networkpartition chaos test waiting 120 seconds
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-jvzjnz" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-jvzjnz" not found
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-0
failover networkpartition Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
test failover podfailure
check cluster status before cluster-failover-podfailure
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-jvzjnz" not found
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: test-chaos-mesh-podfailure-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  mode: all
  action: pod-failure
  duration: 2m
`kubectl apply -f test-chaos-mesh-podfailure-postgres-jvzjnz.yaml`
podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-postgres-jvzjnz created
apply test-chaos-mesh-podfailure-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-podfailure-postgres-jvzjnz.yaml`
podfailure chaos test waiting 120 seconds
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Failed Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Failed
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi
aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-postgres-jvzjnz --namespace ns-bhsuh `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-jvzjnz" force deleted
podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-postgres-jvzjnz patched
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover podfailure Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop `
check readonly db_client batch data Success
test failover connectionstress
check cluster status before cluster-failover-connectionstress
check cluster status done
cluster_status:Running
Error from server (NotFound): pods "test-db-client-connectionstress-postgres-jvzjnz" not found
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-connectionstress-postgres-jvzjnz --namespace ns-bhsuh `
Error from server (NotFound): pods "test-db-client-connectionstress-postgres-jvzjnz" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-connectionstress-postgres-jvzjnz" not found
`kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz`
set secret: postgres-jvzjnz-postgresql-account-postgres
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.username}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.password}"`
`kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.port}"`
DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-connectionstress-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local"
        - "--user"
        - "postgres"
        - "--password"
        - "NrMJ78303K"
        - "--port"
        - "5432"
        - "--database"
        - "postgres"
        - "--dbtype"
        - "postgresql"
        - "--test"
        - "connectionstress"
        - "--connections"
        - "56"
        - "--duration"
        - "60"
  restartPolicy: Never
`kubectl apply -f test-db-client-connectionstress-postgres-jvzjnz.yaml`
pod/test-db-client-connectionstress-postgres-jvzjnz created
apply test-db-client-connectionstress-postgres-jvzjnz.yaml Success
`rm -rf test-db-client-connectionstress-postgres-jvzjnz.yaml`
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-postgres-jvzjnz 1/1 Running 0 5s
check pod test-db-client-connectionstress-postgres-jvzjnz status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-postgres-jvzjnz 0/1 Completed 0 9s
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025
17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800
postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
    at org.postgresql.Driver.makeConnection(Driver.java:431)
    at org.postgresql.Driver.connect(Driver.java:247)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:115)
    at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
    at OneClient.executeTest(OneClient.java:108)
    at OneClient.main(OneClient.java:40)
java.io.IOException: Failed to connect to PostgreSQL database:
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:66)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:115)
    at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
    at OneClient.executeTest(OneClient.java:108)
    at OneClient.main(OneClient.java:40)
Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
    at org.postgresql.Driver.makeConnection(Driver.java:431)
    at org.postgresql.Driver.connect(Driver.java:247)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64)
    ... 4 more
Sep 11, 2025 10:05:54 AM org.postgresql.Driver connect
SEVERE: Connection error:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
    at org.postgresql.Driver.makeConnection(Driver.java:431)
    at org.postgresql.Driver.connect(Driver.java:247)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:59)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:115)
    at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
    at OneClient.executeTest(OneClient.java:108)
    at OneClient.main(OneClient.java:40)
Failed to connect to PostgreSQL database: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
Trying with database PostgreSQL.
Sep 11, 2025 10:05:54 AM org.postgresql.Driver connect
SEVERE: Connection error:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
    at org.postgresql.Driver.makeConnection(Driver.java:431)
    at org.postgresql.Driver.connect(Driver.java:247)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:115)
    at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
    at OneClient.executeTest(OneClient.java:108)
    at OneClient.main(OneClient.java:40)
java.io.IOException: Failed to connect to PostgreSQL database:
    at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:66)
    at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:115)
    at com.apecloud.dbtester.commons.TestExecutor.executeTest(TestExecutor.java:37)
    at OneClient.executeTest(OneClient.java:108)
    at OneClient.main(OneClient.java:40)
Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
    at org.postgresql.Driver.makeConnection(Driver.java:431)
    at
org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64) ... 4 more Test Result: null Connection Information: Database Type: postgresql Host: postgres-jvzjnz-postgresql-postgresql.ns-bhsuh.svc.cluster.local Port: 5432 Database: postgres Table: User: postgres Org: Access Mode: mysql Test Type: connectionstress Connection Count: 56 Duration: 60 seconds `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-connectionstress-postgres-jvzjnz --namespace ns-bhsuh ` pod/test-db-client-connectionstress-postgres-jvzjnz patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-connectionstress-postgres-jvzjnz" force deleted check failover pod name failover pod name:postgres-jvzjnz-postgresql-1 failover connectionstress Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover dnserror check cluster status before cluster-failover-dnserror check cluster status done cluster_status:Running `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnserror-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely. Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-jvzjnz" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnserror-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  action: error
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnserror-postgres-jvzjnz.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-postgres-jvzjnz created apply test-chaos-mesh-dnserror-postgres-jvzjnz.yaml Success `rm -rf test-chaos-mesh-dnserror-postgres-jvzjnz.yaml` dnserror chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 check cluster
connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnserror-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-jvzjnz" force deleted Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-jvzjnz" not found check failover pod name failover pod name:postgres-jvzjnz-postgresql-1 failover dnserror Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster postgresql scale-out cluster postgresql scale-out replicas: 3 check cluster status before ops check cluster status done cluster_status:Running No resources found in postgres-jvzjnz namespace. 
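The next step scales the postgresql component out by one replica (2 to 3). As a minimal sketch (hypothetical data, shaped like the JSON that `kubectl get cluster -o json` would return after the operation), the resulting replica count can be read out of the Cluster spec like this:

```python
import json

# Hypothetical Cluster object, mirroring `kubectl get cluster postgres-jvzjnz -o json`
# output after the scale-out has completed (replicas grown from 2 to 3).
cluster_json = '{"spec": {"componentSpecs": [{"name": "postgresql", "replicas": 3}]}}'

cluster = json.loads(cluster_json)
replicas = next(spec["replicas"]
                for spec in cluster["spec"]["componentSpecs"]
                if spec["name"] == "postgresql")
print(replicas)  # -> 3
```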
`kbcli cluster scale-out postgres-jvzjnz --auto-approve --force=true --components postgresql --replicas 1 --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-horizontalscaling-jfvdk created successfully, you can view the progress: kbcli cluster describe-ops postgres-jvzjnz-horizontalscaling-jfvdk -n ns-bhsuh check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-horizontalscaling-jfvdk ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Creating -/- Sep 11,2025 18:08 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (identical "cluster_status:Updating" polls repeated while the scale-out progressed; duplicates omitted) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59
UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800 postgres-jvzjnz-postgresql-2 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:08 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 postgres-jvzjnz-postgresql-2 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done No resources found in postgres-jvzjnz namespace. check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-horizontalscaling-jfvdk ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:08 UTC+0800 check ops status done ops_status:postgres-jvzjnz-horizontalscaling-jfvdk ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:08 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-jvzjnz-horizontalscaling-jfvdk --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-horizontalscaling-jfvdk patched `kbcli cluster delete-ops --name postgres-jvzjnz-horizontalscaling-jfvdk --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-horizontalscaling-jfvdk deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it 
postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster postgresql scale-in cluster postgresql scale-in replicas: 2 check cluster status before ops check cluster status done cluster_status:Running No resources found in postgres-jvzjnz namespace. `kbcli cluster scale-in postgres-jvzjnz --auto-approve --force=true --components postgresql --replicas 1 --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-horizontalscaling-jpbs8 created successfully, you can view the progress: kbcli cluster describe-ops postgres-jvzjnz-horizontalscaling-jpbs8 -n ns-bhsuh check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-horizontalscaling-jpbs8 ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Creating -/- Sep 11,2025 18:10 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 100m / 100m 512Mi / 512Mi data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 17:59 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 100m / 100m 512Mi / 512Mi data:7Gi 
aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:00 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done No resources found in postgres-jvzjnz namespace. check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-horizontalscaling-jpbs8 ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:10 UTC+0800 check ops status done ops_status:postgres-jvzjnz-horizontalscaling-jpbs8 ns-bhsuh HorizontalScaling postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:10 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-jvzjnz-horizontalscaling-jpbs8 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-horizontalscaling-jpbs8 patched `kbcli cluster delete-ops --name postgres-jvzjnz-horizontalscaling-jpbs8 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-horizontalscaling-jpbs8 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale postgres-jvzjnz --auto-approve --force=true --components postgresql --cpu 200m --memory 0.6Gi --namespace ns-bhsuh ` OpsRequest 
postgres-jvzjnz-verticalscaling-m2m88 created successfully, you can view the progress: kbcli cluster describe-ops postgres-jvzjnz-verticalscaling-m2m88 -n ns-bhsuh check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-verticalscaling-m2m88 ns-bhsuh VerticalScaling postgres-jvzjnz postgresql Creating -/- Sep 11,2025 18:11 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (identical "cluster_status:Updating" polls repeated while the vertical scaling progressed; duplicates omitted) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-verticalscaling-m2m88 ns-bhsuh VerticalScaling postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:11 UTC+0800 check ops status done ops_status:postgres-jvzjnz-verticalscaling-m2m88 ns-bhsuh VerticalScaling postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:11 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-jvzjnz-verticalscaling-m2m88 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-verticalscaling-m2m88 patched `kbcli cluster delete-ops --name postgres-jvzjnz-verticalscaling-m2m88 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-verticalscaling-m2m88 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly 
db_client batch data Success cluster configure component_tmp: postgresql
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: postgres-jvzjnz-reconfiguring-
  namespace: ns-bhsuh
spec:
  type: Reconfiguring
  clusterName: postgres-jvzjnz
  force: true
  reconfigures:
    - componentName: postgresql
      parameters:
        - key: max_connections
          value: '200'
check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-jvzjnz.yaml` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-reconfiguring-bjvjm created create test_ops_cluster_postgres-jvzjnz.yaml Success `rm -rf test_ops_cluster_postgres-jvzjnz.yaml` check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-reconfiguring-bjvjm ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Creating -/- Sep 11,2025 18:14 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:13
UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-reconfiguring-bjvjm ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:14 UTC+0800 check ops status done ops_status:postgres-jvzjnz-reconfiguring-bjvjm ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:14 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-jvzjnz-reconfiguring-bjvjm --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-reconfiguring-bjvjm patched `kbcli cluster delete-ops --name postgres-jvzjnz-reconfiguring-bjvjm --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-reconfiguring-bjvjm deleted component_config:postgresql check config variables Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file current value_actual: 200 configure:[max_connections] result actual:[200] equal expected:[200] check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch 
data Success test failover dnsrandom check cluster status before cluster-failover-dnsrandom check cluster status done cluster_status:Running `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnsrandom-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-jvzjnz" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnsrandom-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  mode: all
  action: random
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnsrandom-postgres-jvzjnz.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-postgres-jvzjnz created apply test-chaos-mesh-dnsrandom-postgres-jvzjnz.yaml Success `rm -rf test-chaos-mesh-dnsrandom-postgres-jvzjnz.yaml` dnsrandom chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi
aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnsrandom-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-jvzjnz" force deleted dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-postgres-jvzjnz patched check failover pod name failover pod name:postgres-jvzjnz-postgresql-0 failover dnsrandom Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover mistake check cluster status before cluster-failover-mistake check cluster status done cluster_status:Running `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge IOChaos test-chaos-mesh-mistake-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely. Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-mistake-postgres-jvzjnz" not found Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-mistake-postgres-jvzjnz" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: IOChaos
metadata:
  name: test-chaos-mesh-mistake-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  action: mistake
  mode: all
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  volumePath: /home/postgres/pgdata
  path: '/home/postgres/pgdata/**/*'
  mistake:
    filling: zero
    maxOccurrences: 1
    maxLength: 10
  methods:
    - READ
    - WRITE
  percent: 100
  duration: 2m
`kubectl apply -f test-chaos-mesh-mistake-postgres-jvzjnz.yaml` iochaos.chaos-mesh.org/test-chaos-mesh-mistake-postgres-jvzjnz created apply test-chaos-mesh-mistake-postgres-jvzjnz.yaml Success `rm -rf test-chaos-mesh-mistake-postgres-jvzjnz.yaml` mistake chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep
11,2025 18:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge IOChaos test-chaos-mesh-mistake-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. iochaos.chaos-mesh.org "test-chaos-mesh-mistake-postgres-jvzjnz" force deleted Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-mistake-postgres-jvzjnz" not found check failover pod name failover pod name:postgres-jvzjnz-postgresql-0 failover mistake Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover timeoffset check cluster status before cluster-failover-timeoffset check cluster status done cluster_status:Running `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge TimeChaos test-chaos-mesh-timeoffset-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
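The `644245094400m / 644245094400m` memory figures in the instance tables above are the 0.6Gi limit applied by the earlier vertical scaling, printed in Kubernetes milli-units: 0.6Gi is not a whole number of bytes, so the quantity is stored as an integer count of milli-bytes with an `m` suffix. A quick check of the arithmetic:

```python
# 0.6Gi = 0.6 * 2^30 bytes = 644245094.4 bytes -- not an integer number
# of bytes, so Kubernetes renders the quantity in milli-bytes ("m" suffix).
GI = 1024 ** 3
milli_bytes = 6 * GI * 100   # 0.6 * GI * 1000, computed entirely in integers
print(f"{milli_bytes}m")     # -> 644245094400m, matching the instance tables
```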
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: TimeChaos
metadata:
  name: test-chaos-mesh-timeoffset-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  mode: all
  timeOffset: '-10m'
  clockIds:
    - CLOCK_REALTIME
  duration: 2m

`kubectl apply -f test-chaos-mesh-timeoffset-postgres-jvzjnz.yaml`
timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-postgres-jvzjnz created
apply test-chaos-mesh-timeoffset-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-timeoffset-postgres-jvzjnz.yaml`
timeoffset chaos test waiting 120 seconds

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1

check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-jvzjnz" force deleted
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-jvzjnz" not found

check failover pod name
failover pod name:postgres-jvzjnz-postgresql-0
failover timeoffset Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test failover networkcorruptover
check cluster status before cluster-failover-networkcorruptover
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
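Each failover round above ends by comparing row counts on the current primary and the read-only replica. Assuming the `executions_loop_table` table in the `executions_loop` database used by this run, the comparison could be scripted roughly like this (helper names are ours):

```shell
# Run a statement through psql inside a pod; -tA prints a bare value
# with no headers, so the output is directly comparable.
run_sql() {  # $1=pod  $2=sql
  echo "$2" | kubectl exec -i "$1" -n ns-bhsuh -- \
    psql -U postgres -d executions_loop -tA
}

# True when the replica has the same row count as the primary.
counts_match() {  # $1=primary pod  $2=replica pod
  q='select count(*) from executions_loop_table;'
  [ "$(run_sql "$1" "$q")" = "$(run_sql "$2" "$q")" ]
}
# Usage: counts_match postgres-jvzjnz-postgresql-0 postgres-jvzjnz-postgresql-1
```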
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkcorruptover-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0
  mode: all
  action: corrupt
  corrupt:
    corrupt: '100'
    correlation: '100'
  direction: to
  duration: 2m

`kubectl apply -f test-chaos-mesh-networkcorruptover-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-postgres-jvzjnz created
apply test-chaos-mesh-networkcorruptover-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networkcorruptover-postgres-jvzjnz.yaml`
networkcorruptover chaos test waiting 120 seconds

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE      ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done

check cluster role
No resources found in ns-bhsuh namespace.
primary: postgres-jvzjnz-postgresql-0 postgres-jvzjnz-postgresql-1;secondary:
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0

check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
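During the corrupt test both pods transiently reported the primary role before the checker settled on a single primary. Assuming the `kubeblocks.io/role` pod label that KubeBlocks maintains on replicas (an assumption; verify the label key on your installation), the settle-wait could be sketched as:

```shell
# List pods currently labelled primary for this cluster instance.
# Label key kubeblocks.io/role is assumed, not confirmed by this log.
primary_pods() {
  kubectl get pods -n ns-bhsuh \
    -l app.kubernetes.io/instance=postgres-jvzjnz,kubeblocks.io/role=primary \
    -o jsonpath='{.items[*].metadata.name}'
}

# Poll until exactly one primary exists, tolerating the transient
# 0-or-2-primaries window seen mid-failover; prints the winner.
wait_single_primary() {  # $1=max polls  $2=seconds between polls
  tries=$1
  pause=$2
  i=0
  while [ "$i" -lt "$tries" ]; do
    names=$(primary_pods)
    if [ "$(echo "$names" | wc -w)" -eq 1 ]; then
      echo "$names"
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  return 1
}
# Usage: wait_single_primary 30 10
```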
networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-jvzjnz" force deleted
networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-postgres-jvzjnz patched

check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover networkcorruptover Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test failover networkduplicate
check cluster status before cluster-failover-networkduplicate
check cluster status done
cluster_status:Running
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkduplicate-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  mode: all
  action: duplicate
  duplicate:
    duplicate: '100'
    correlation: '100'
  direction: to
  duration: 2m

`kubectl apply -f test-chaos-mesh-networkduplicate-postgres-jvzjnz.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-postgres-jvzjnz created
apply test-chaos-mesh-networkduplicate-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-networkduplicate-postgres-jvzjnz.yaml`
networkduplicate chaos test waiting 120 seconds

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0

check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-jvzjnz" force deleted
networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-postgres-jvzjnz patched

check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover networkduplicate Success

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

test failover faultover
check cluster status before cluster-failover-faultover
check cluster status done
cluster_status:Running
`kubectl apply -f test-chaos-mesh-faultover-postgres-jvzjnz.yaml`
error: the server doesn't have a resource type ""
error: no objects passed to apply
apply test-chaos-mesh-faultover-postgres-jvzjnz.yaml retry
error: no objects passed to apply
apply test-chaos-mesh-faultover-postgres-jvzjnz.yaml retry
apply test-chaos-mesh-faultover-postgres-jvzjnz.yaml timeout
`rm -rf test-chaos-mesh-faultover-postgres-jvzjnz.yaml`

check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0

check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
checking failover...
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
failover pod name:postgres-jvzjnz-postgresql-1
checking failover...
check failover pod name timeout

check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

cluster update terminationPolicy WipeOut
`kbcli cluster update postgres-jvzjnz --termination-policy=WipeOut --namespace ns-bhsuh`
cluster.apps.kubeblocks.io/postgres-jvzjnz updated (no change)
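The faultover manifest never applied (`no objects passed to apply`) and the harness retried until a timeout, just as the failover check later polled until `check failover pod name timeout`. A bounded retry wrapper like the one below (the `retry` helper name is ours) captures that pattern without looping forever:

```shell
# Retry a command up to N times with a fixed delay between attempts,
# then give up, mirroring the "retry ... timeout" behaviour above.
retry() {  # $1=attempts  $2=delay seconds  $3...=command
  attempts=$1
  delay=$2
  shift 2
  n=0
  while [ "$n" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    n=$((n + 1))
    sleep "$delay"
  done
  echo "retry: '$*' timed out after $attempts attempts" >&2
  return 1
}
# Usage: retry 60 5 kubectl apply -f test-chaos-mesh-faultover-postgres-jvzjnz.yaml
```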
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done
cluster_status:Running

check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:11 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:13 UTC+0800
check pod status done

check cluster role
check cluster role done
primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0

check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done

`kubectl get backupschedule -l app.kubernetes.io/instance=postgres-jvzjnz`
`kubectl get backupschedule postgres-jvzjnz-postgresql-backup-schedule -ojsonpath='{.spec.schedules[*].backupMethod}'`
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change)
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change)
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change)
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change)
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change)
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched

cluster wal-g backup
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`
`kubectl get secrets kb-backuprepo-fc6xr -n kb-heauh -o jsonpath="{.data.accessKeyId}"`
`kubectl get secrets kb-backuprepo-fc6xr -n kb-heauh -o jsonpath="{.data.secretAccessKey}"`
KUBEBLOCKS NAMESPACE:kb-heauh
get kubeblocks namespace done
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-heauh -o jsonpath="{.items[0].data.root-user}"`
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-heauh -o jsonpath="{.items[0].data.root-password}"`
minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-heauh.svc.cluster.local:9000
list minio bucket kbcli-test
`echo 'mc config host add minioserver http://kbcli-test-minio.kb-heauh.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-8f45f86b6-jvj74 --namespace kb-heauh -- bash`
Unable to use a TTY - input is not a terminal or the right kind of file
list minio bucket done
default
backuprepo:backuprepo-kbcli-test exists
`kbcli cluster backup postgres-jvzjnz --method wal-g --namespace ns-bhsuh`
Backup backup-ns-bhsuh-postgres-jvzjnz-20250911182704 created successfully, you can view the progress:
	kbcli cluster list-backups --names=backup-ns-bhsuh-postgres-jvzjnz-20250911182704 -n ns-bhsuh

check backup status
`kbcli cluster list-backups postgres-jvzjnz --namespace ns-bhsuh`
NAME                                             NAMESPACE   SOURCE-CLUSTER    METHOD          STATUS                      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATE-TIME                  COMPLETION-TIME   EXPIRATION
6589b244-postgres-jvzjnz-postg-wal-g-archive     ns-bhsuh    postgres-jvzjnz   wal-g-archive   Running(AvailablePods: 0)                           Delete            Sep 11,2025 18:27 UTC+0800
backup-ns-bhsuh-postgres-jvzjnz-20250911182704   ns-bhsuh    postgres-jvzjnz   wal-g                                                              Delete            Sep 11,2025 18:27 UTC+0800
backup_status:postgres-jvzjnz-wal-g-Running
check backup status done
backup_status:backup-ns-bhsuh-postgres-jvzjnz-20250911182704 ns-bhsuh postgres-jvzjnz wal-g Completed 9063296 18s Delete Sep 11,2025 18:27 UTC+0800 Sep 11,2025 18:27 UTC+0800

`create table if not exists msg(id SERIAL PRIMARY KEY, msg text, time timestamp);insert into msg (msg, time) values ('kbcli-test-data-jvzjnz0', now());`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
CREATE TABLE
INSERT 0 1
`insert into msg (msg, time) values ('kbcli-test-data-jvzjnz1', now());`
INSERT 0 1
 pg_switch_wal
---------------
 0/11012C80
(1 row)
`insert into msg (msg, time) values ('kbcli-test-data-jvzjnz2', now());`
INSERT 0 1
 pg_switch_wal
---------------
 0/12003CE8
(1 row)
checking recoverable time 1
recoverable time:Sep 11,2025 18:27:26 UTC+0800
`insert into msg (msg, time) values ('kbcli-test-data-jvzjnz4', now());`
INSERT 0 1
 pg_switch_wal
---------------
 0/13000108
(1 row)
check recoverable time 1 done
recoverable time:Sep 11,2025 18:27:33 UTC+0800

cluster restore-to-time backup
Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-jvzjnz-backup" not found
`kbcli cluster restore postgres-jvzjnz-backup --backup 6589b244-postgres-jvzjnz-postg-wal-g-archive --restore-to-time "Sep 11,2025 18:27:33 UTC+0800" --namespace ns-bhsuh`
Cluster postgres-jvzjnz-backup created

check cluster status
`kbcli cluster list postgres-jvzjnz-backup --show-labels --namespace ns-bhsuh`
NAME NAMESPACE
CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz-backup ns-bhsuh postgresql WipeOut Creating Sep 11,2025 18:27 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz-backup --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-backup-postgresql-0 ns-bhsuh postgres-jvzjnz-backup postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:29 UTC+0800 postgres-jvzjnz-backup-postgresql-1 ns-bhsuh postgres-jvzjnz-backup postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:29 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-backup-postgresql-1;secondary: postgres-jvzjnz-backup-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-backup-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `select * from msg;` Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file id | msg | time ----+-------------------------+---------------------------- 1 | kbcli-test-data-jvzjnz0 | 2025-09-11 10:27:25.770841 2 | kbcli-test-data-jvzjnz1 | 2025-09-11 10:27:26.92588 (2 rows) Point-In-Time Recovery Success delete cluster postgres-jvzjnz-backup `kbcli cluster delete postgres-jvzjnz-backup --auto-approve --namespace ns-bhsuh 
` Cluster postgres-jvzjnz-backup deleted pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Running 0 84s postgres-jvzjnz-backup-postgresql-1 5/5 Running 0 84s pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Terminating 0 104s postgres-jvzjnz-backup-postgresql-1 5/5 Terminating 0 104s No resources found in ns-bhsuh namespace. delete cluster pod done No resources found in ns-bhsuh namespace. check cluster resource non-exist OK: pvc No resources found in ns-bhsuh namespace. delete cluster done No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-jvzjnz-backup" not found `kbcli cluster describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911182704 --namespace ns-bhsuh ` Name: backup-ns-bhsuh-postgres-jvzjnz-20250911182704 Cluster: postgres-jvzjnz Namespace: ns-bhsuh Spec: Method: wal-g Policy Name: postgres-jvzjnz-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-bhsuh-postgres-jvzjnz-20250911182704-7e93 TargetPodName: postgres-jvzjnz-postgresql-1 Phase: Completed Start Time: Sep 11,2025 18:27 UTC+0800 Completion Time: Sep 11,2025 18:27 UTC+0800 Extras: =================== 1 =================== walGBackupName: base_0000000D000000000000000F Status: Phase: Completed Total Size: 9063296 ActionSet Name: postgresql-wal-g Repository: backuprepo-kbcli-test Duration: 18s Start Time: Sep 11,2025 18:27 UTC+0800 Completion Time: Sep 11,2025 18:27 UTC+0800 Path: /ns-bhsuh/postgres-jvzjnz-6589b244-609c-46f9-989f-1a47d1567b9a/postgresql/backup-ns-bhsuh-postgres-jvzjnz-20250911182704 Time Range Start: Sep 11,2025 18:27 UTC+0800 Time Range End: Sep 11,2025 18:27 UTC+0800 Warning Events: `kbcli cluster restore postgres-jvzjnz-backup --backup backup-ns-bhsuh-postgres-jvzjnz-20250911182704 --namespace ns-bhsuh ` Cluster postgres-jvzjnz-backup created check 
cluster status `kbcli cluster list postgres-jvzjnz-backup --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz-backup ns-bhsuh postgresql WipeOut Creating Sep 11,2025 18:31 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz-backup --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-backup-postgresql-0 ns-bhsuh postgres-jvzjnz-backup postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:32 UTC+0800 postgres-jvzjnz-backup-postgresql-1 ns-bhsuh postgres-jvzjnz-backup postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:32 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-backup-postgresql-0;secondary: postgres-jvzjnz-backup-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `kbcli cluster describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911182704 --namespace ns-bhsuh ` Name: backup-ns-bhsuh-postgres-jvzjnz-20250911182704 Cluster: postgres-jvzjnz Namespace: ns-bhsuh Spec: Method: wal-g Policy Name: postgres-jvzjnz-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-bhsuh-postgres-jvzjnz-20250911182704-7e93 TargetPodName: postgres-jvzjnz-postgresql-1 Phase: Completed Start Time: Sep 11,2025 18:27 UTC+0800 Completion Time: Sep 11,2025 18:27 UTC+0800 Extras: 
=================== 1 =================== walGBackupName: base_0000000D000000000000000F Status: Phase: Completed Total Size: 9063296 ActionSet Name: postgresql-wal-g Repository: backuprepo-kbcli-test Duration: 18s Start Time: Sep 11,2025 18:27 UTC+0800 Completion Time: Sep 11,2025 18:27 UTC+0800 Path: /ns-bhsuh/postgres-jvzjnz-6589b244-609c-46f9-989f-1a47d1567b9a/postgresql/backup-ns-bhsuh-postgres-jvzjnz-20250911182704 Time Range Start: Sep 11,2025 18:27 UTC+0800 Time Range End: Sep 11,2025 18:27 UTC+0800 Warning Events: `kubectl get secrets -l app.kubernetes.io/instance=postgres-jvzjnz` set secret: postgres-jvzjnz-postgresql-account-postgres `kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.username}"` `kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.password}"` `kubectl get secrets postgres-jvzjnz-postgresql-account-postgres -o jsonpath="{.data.port}"` DB_USERNAME:postgres;DB_PASSWORD:NrMJ78303K;DB_PORT:5432;DB_DATABASE:postgres `echo 'DROP TABLE msg;' | kubectl exec -it postgres-jvzjnz-postgresql-1 -n default -- psql -U postgres ` Error from server (NotFound): pods "postgres-jvzjnz-postgresql-1" not found `kubectl get backupschedule -l app.kubernetes.io/instance=postgres-jvzjnz ` `kubectl get backupschedule postgres-jvzjnz-postgresql-backup-schedule -ojsonpath='{.spec.schedules[*].backupMethod}' ` backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change) (repeats elided) backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: extension "vector" already exists Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+------------------------------------------------------------------------ file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.6 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.10 | public | track planning and execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 4.0.1 | public | similar to SET ROLE but with added logging vector | 0.7.4 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file max_connections ----------------- 67 (1 row) connect cluster Success set max_connections to 67 delete cluster postgres-jvzjnz-backup `kbcli cluster delete postgres-jvzjnz-backup --auto-approve --namespace ns-bhsuh ` Cluster postgres-jvzjnz-backup deleted pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Running 0 61s postgres-jvzjnz-backup-postgresql-1 5/5 Running 0 61s pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Terminating 0 81s postgres-jvzjnz-backup-postgresql-1 5/5 Terminating 0 81s No resources found in ns-bhsuh namespace. delete cluster pod done No resources found in ns-bhsuh namespace. check cluster resource non-exist OK: pvc No resources found in ns-bhsuh namespace. delete cluster done No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. 
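The long runs of `cluster_status:Creating` / `cluster_status:Updating` earlier in this log are the output of a poll-until-Running loop. A minimal sketch of that pattern, with `get_status` stubbed in place of the real `kbcli cluster list` parse (no live cluster is assumed here, and the stub's three-poll delay is invented for illustration):

```shell
# Sketch of the harness's "check cluster status" wait loop.
# get_status is a stub standing in for something like:
#   kbcli cluster list "$CLUSTER_NAME" -n "$NAMESPACE" | awk 'NR==2 {print $5}'
i=0
get_status() {
  # stub: report Updating for the first three polls, then Running
  if [ "$i" -lt 3 ]; then echo "Updating"; else echo "Running"; fi
}
while [ "$(get_status)" != "Running" ]; do
  echo "cluster_status:$(get_status)"
  i=$((i + 1))   # a real loop would sleep between polls
done
echo "cluster_status:$(get_status)"
```

A production version of this check would also cap the number of polls and fail the run on timeout instead of looping indefinitely.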
cluster delete backup `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups 6589b244-postgres-jvzjnz-postg-wal-g-archive --namespace ns-bhsuh ` backup.dataprotection.kubeblocks.io/6589b244-postgres-jvzjnz-postg-wal-g-archive patched `kbcli cluster delete-backup postgres-jvzjnz --name 6589b244-postgres-jvzjnz-postg-wal-g-archive --force --auto-approve --namespace ns-bhsuh ` Backup 6589b244-postgres-jvzjnz-postg-wal-g-archive deleted `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-bhsuh-postgres-jvzjnz-20250911182704 --namespace ns-bhsuh ` backup.dataprotection.kubeblocks.io/backup-ns-bhsuh-postgres-jvzjnz-20250911182704 patched `kbcli cluster delete-backup postgres-jvzjnz --name backup-ns-bhsuh-postgres-jvzjnz-20250911182704 --force --auto-approve --namespace ns-bhsuh ` Backup backup-ns-bhsuh-postgres-jvzjnz-20250911182704 deleted cmpv upgrade service version:2,12.14.0|2,12.14.1|2,12.15.0|2,12.22.0|2,14.7.2|2,14.8.0|2,14.18.0|2,15.7.0|2,15.13.0|2,16.4.0|2,16.9.0|2,17.5.0 set latest cmpv service version latest service version:16.9.0 cmpv service version upgrade and downgrade upgrade from:16.4.0 to service version:16.9.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-jvzjnz-upgrade-cmpv- namespace: ns-bhsuh spec: clusterName: postgres-jvzjnz upgrade: components: - componentName: postgresql serviceVersion: 16.9.0 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-jvzjnz.yaml` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-upgrade-cmpv-c8vb7 created create test_ops_cluster_postgres-jvzjnz.yaml Success `rm -rf test_ops_cluster_postgres-jvzjnz.yaml` check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg
ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-upgrade-cmpv-c8vb7 ns-bhsuh Upgrade postgres-jvzjnz Creating -/- Sep 11,2025 18:33 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (polling repeats elided) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl
exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-upgrade-cmpv-c8vb7 ns-bhsuh Upgrade postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:33 UTC+0800 check ops status done ops_status:postgres-jvzjnz-upgrade-cmpv-c8vb7 ns-bhsuh Upgrade postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:33 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-upgrade-cmpv-c8vb7 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-upgrade-cmpv-c8vb7 patched `kbcli cluster delete-ops --name postgres-jvzjnz-upgrade-cmpv-c8vb7 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-upgrade-cmpv-c8vb7 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success downgrade from:16.9.0 to service version:16.4.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-jvzjnz-upgrade-cmpv- namespace: ns-bhsuh spec: clusterName: postgres-jvzjnz upgrade: components: - componentName: postgresql serviceVersion: 16.4.0
type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-jvzjnz.yaml` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-upgrade-cmpv-qp2r2 created create test_ops_cluster_postgres-jvzjnz.yaml Success `rm -rf test_ops_cluster_postgres-jvzjnz.yaml` check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-upgrade-cmpv-qp2r2 ns-bhsuh Upgrade postgres-jvzjnz Creating -/- Sep 11,2025 18:35 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (polling repeats elided) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi
aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:11 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-upgrade-cmpv-qp2r2 ns-bhsuh Upgrade postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:35 UTC+0800 check ops status done ops_status:postgres-jvzjnz-upgrade-cmpv-qp2r2 ns-bhsuh Upgrade postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:35 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-upgrade-cmpv-qp2r2 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-upgrade-cmpv-qp2r2 patched `kbcli cluster delete-ops --name postgres-jvzjnz-upgrade-cmpv-qp2r2 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-upgrade-cmpv-qp2r2 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from
executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart postgres-jvzjnz --auto-approve --force=true --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-restart-b9wn2 created successfully, you can view the progress: kbcli cluster describe-ops postgres-jvzjnz-restart-b9wn2 -n ns-bhsuh check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-restart-b9wn2 ns-bhsuh Restart postgres-jvzjnz postgresql Running -/- Sep 11,2025 18:37 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (polling repeats elided) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:37 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:39 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-restart-b9wn2 ns-bhsuh Restart postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:37 UTC+0800 check ops status done ops_status:postgres-jvzjnz-restart-b9wn2 ns-bhsuh Restart postgres-jvzjnz postgresql Succeed 2/2 Sep 11,2025 18:37 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations
postgres-jvzjnz-restart-b9wn2 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-restart-b9wn2 patched `kbcli cluster delete-ops --name postgres-jvzjnz-restart-b9wn2 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-restart-b9wn2 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster does not need to check monitor currently check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:37 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:39 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-0;secondary: postgres-jvzjnz-postgresql-1 check cluster connect `echo '' | 
kubectl exec -it postgres-jvzjnz-postgresql-0 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done test failover podkill check cluster status before cluster-failover-podkill check cluster status done cluster_status:Running `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-postgres-jvzjnz" not found Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podkill-postgres-jvzjnz" not found apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podkill-postgres-jvzjnz namespace: ns-bhsuh spec: selector: namespaces: - ns-bhsuh labelSelectors: apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-0 mode: all action: pod-kill `kubectl apply -f test-chaos-mesh-podkill-postgres-jvzjnz.yaml` podchaos.chaos-mesh.org/test-chaos-mesh-podkill-postgres-jvzjnz created apply test-chaos-mesh-podkill-postgres-jvzjnz.yaml Success `rm -rf test-chaos-mesh-podkill-postgres-jvzjnz.yaml` check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh postgresql WipeOut Updating Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating (polling repeats elided) check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE
ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000000/10.224.0.6 Sep 11,2025 18:40 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:39 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podkill-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
podchaos.chaos-mesh.org "test-chaos-mesh-podkill-postgres-jvzjnz" force deleted podchaos.chaos-mesh.org/test-chaos-mesh-podkill-postgres-jvzjnz patched check failover pod name failover pod name:postgres-jvzjnz-postgresql-1 failover podkill Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover attroverride check cluster status before cluster-failover-attroverride check cluster status done cluster_status:Running `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge IOChaos test-chaos-mesh-attroverride-postgres-jvzjnz --namespace ns-bhsuh ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
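The `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge` calls that recur throughout this run clear an object's finalizer list so a stuck chaos resource can be force-deleted. A minimal sketch of how that merge-patch payload is built (the `build_finalizer_patch` helper is hypothetical, for illustration; the real harness just inlines the JSON string):

```python
import json

def build_finalizer_patch() -> str:
    # JSON merge patch that replaces the finalizer list with an empty one,
    # letting Kubernetes complete deletion of a stuck object.
    return json.dumps({"metadata": {"finalizers": []}})

# The payload passed to `kubectl patch -p ... --type=merge` in the log above:
payload = build_finalizer_patch()
```

Because `--type=merge` is a JSON merge patch, the empty array fully replaces the existing finalizer list rather than appending to it.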
Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-attroverride-postgres-jvzjnz" not found
Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-attroverride-postgres-jvzjnz" not found

apiVersion: chaos-mesh.org/v1alpha1
kind: IOChaos
metadata:
  name: test-chaos-mesh-attroverride-postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  action: attrOverride
  mode: all
  selector:
    namespaces:
      - ns-bhsuh
    labelSelectors:
      apps.kubeblocks.io/pod-name: postgres-jvzjnz-postgresql-1
  volumePath: /home/postgres/pgdata
  path: '/home/postgres/pgdata/**/*'
  attr:
    perm: 72
  percent: 100
  duration: 2m

`kubectl apply -f test-chaos-mesh-attroverride-postgres-jvzjnz.yaml`
iochaos.chaos-mesh.org/test-chaos-mesh-attroverride-postgres-jvzjnz created
apply test-chaos-mesh-attroverride-postgres-jvzjnz.yaml Success
`rm -rf test-chaos-mesh-attroverride-postgres-jvzjnz.yaml`
attroverride chaos test waiting 120 seconds
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 18:40 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:39 UTC+0800
check pod status done
check cluster role
check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge IOChaos test-chaos-mesh-attroverride-postgres-jvzjnz --namespace ns-bhsuh`
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
iochaos.chaos-mesh.org "test-chaos-mesh-attroverride-postgres-jvzjnz" force deleted
Error from server (NotFound): iochaos.chaos-mesh.org "test-chaos-mesh-attroverride-postgres-jvzjnz" not found
check failover pod name
failover pod name:postgres-jvzjnz-postgresql-1
failover attroverride Success
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

cluster update terminationPolicy WipeOut
`kbcli cluster update postgres-jvzjnz --termination-policy=WipeOut --namespace ns-bhsuh`
cluster.apps.kubeblocks.io/postgres-jvzjnz updated (no change)
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME NAMESPACE CLUSTER
COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh   postgres-jvzjnz   postgresql   Running   secondary   0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000000/10.224.0.6   Sep 11,2025 18:40 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh   postgres-jvzjnz   postgresql   Running   primary     0   200m / 200m   644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:39 UTC+0800
check pod status done
check cluster role
check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
check cluster storage class

cluster volume-snapshot backup
`kbcli cluster backup postgres-jvzjnz --method volume-snapshot --namespace ns-bhsuh`
Backup backup-ns-bhsuh-postgres-jvzjnz-20250911184321 created successfully, you can view the progress:
        kbcli cluster list-backups --names=backup-ns-bhsuh-postgres-jvzjnz-20250911184321 -n ns-bhsuh
check backup status
`kbcli cluster list-backups postgres-jvzjnz --namespace ns-bhsuh`
NAME                                             NAMESPACE   SOURCE-CLUSTER    METHOD            STATUS   TOTAL-SIZE   DURATION   DELETION-POLICY   CREATE-TIME                  COMPLETION-TIME   EXPIRATION
backup-ns-bhsuh-postgres-jvzjnz-20250911184321   ns-bhsuh    postgres-jvzjnz   volume-snapshot                                    Delete            Sep 11,2025 18:43 UTC+0800
backup_status:postgres-jvzjnz-volume-snapshot-Running
check backup status done backup_status:backup-ns-bhsuh-postgres-jvzjnz-20250911184321 ns-bhsuh postgres-jvzjnz volume-snapshot Completed 7Gi 13s Delete Sep 11,2025 18:43 UTC+0800 Sep 11,2025 18:43 UTC+0800

cluster restore backup
Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-jvzjnz-backup" not found
`kbcli cluster
describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911184321 --namespace ns-bhsuh`
Name: backup-ns-bhsuh-postgres-jvzjnz-20250911184321
Cluster: postgres-jvzjnz
Namespace: ns-bhsuh
Spec:
  Method: volume-snapshot
  Policy Name: postgres-jvzjnz-postgresql-backup-policy
Actions:
  createVolumeSnapshot-0:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b40d9f]

goroutine 1 [running]:
github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc000a74840, 0xc000a92008)
        /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4bf
github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc000a74840, {0xc00133f5d0?, 0x192f1db?, 0xc001386248?})
        /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125
github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, {0xc00082d9c0?, 0x0?, 0xc49f486d00000000?})
        /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66
github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc000ed7b08?, {0xc00082d9c0, 0x0, 0x4})
        /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5
github.com/spf13/cobra.(*Command).execute(0xc000ed7b08, {0xc00082d980, 0x4, 0x4})
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0xc000a1d208)
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/component-base/cli.run(0xc000a1d208)
        /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290
k8s.io/component-base/cli.RunNoErrOutput(...)
        /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84
main.main()
        /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18

`kbcli cluster restore postgres-jvzjnz-backup --backup backup-ns-bhsuh-postgres-jvzjnz-20250911184321 --namespace ns-bhsuh`
Cluster postgres-jvzjnz-backup created
check cluster status
`kbcli cluster list postgres-jvzjnz-backup --show-labels --namespace ns-bhsuh`
NAME                     NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
postgres-jvzjnz-backup   ns-bhsuh    postgresql           WipeOut                       Sep 11,2025 18:43 UTC+0800
cluster_status:Creating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz-backup --namespace ns-bhsuh`
NAME                                  NAMESPACE   CLUSTER                  COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-backup-postgresql-0   ns-bhsuh    postgres-jvzjnz-backup   postgresql   Running   primary                  0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:43 UTC+0800
postgres-jvzjnz-backup-postgresql-1   ns-bhsuh    postgres-jvzjnz-backup   postgresql   Running   secondary                0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:43 UTC+0800
check pod status done
check cluster role
check cluster role done primary: postgres-jvzjnz-backup-postgresql-0;secondary: postgres-jvzjnz-backup-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kbcli cluster describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911184321 --namespace ns-bhsuh`
Name: backup-ns-bhsuh-postgres-jvzjnz-20250911184321
Cluster: postgres-jvzjnz
Namespace: ns-bhsuh
Spec:
  Method: volume-snapshot
  Policy Name: postgres-jvzjnz-postgresql-backup-policy
Actions:
  createVolumeSnapshot-0:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b40d9f]

goroutine 1 [running]:
github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc0011f8480, 0xc00050b8c8)
        /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4bf
github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc0011f8480, {0xc0015bb980?, 0x192f1db?, 0xc001610488?})
        /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125
github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, {0xc0007e16c0?, 0x0?, 0x3fe3fda00000000?})
        /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66
github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc000f40008?, {0xc0007e16c0, 0x0, 0x4})
        /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5
github.com/spf13/cobra.(*Command).execute(0xc000f40008, {0xc0007e1680, 0x4, 0x4})
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0xc0005ed508)
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/component-base/cli.run(0xc0005ed508)
        /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290
k8s.io/component-base/cli.RunNoErrOutput(...)
        /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84
main.main()
        /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18

cluster connect
`echo 'create extension vector;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
ERROR:  extension "vector" already exists
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
                                          List of installed extensions
        Name        | Version |   Schema   |                              Description
--------------------+---------+------------+------------------------------------------------------------------------
 file_fdw           | 1.0     | public     | foreign-data wrapper for flat file access
 pg_auth_mon        | 1.1     | public     | monitor connection attempts per user
 pg_cron            | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_stat_kcache     | 2.2.3   | public     | Kernel statistics gathering
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpython3u         | 1.0     | pg_catalog | PL/Python3U untrusted procedural language
 set_user           | 4.1.0   | public     | similar to SET ROLE but with added logging
 vector             | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)
`echo 'show max_connections;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
 max_connections
-----------------
 200
(1 row)
connect cluster Success
set max_connections to 200
delete cluster postgres-jvzjnz-backup
`kbcli cluster delete postgres-jvzjnz-backup --auto-approve --namespace ns-bhsuh`
Cluster postgres-jvzjnz-backup deleted
pod_info:postgres-jvzjnz-backup-postgresql-0   5/5   Running       0   63s
postgres-jvzjnz-backup-postgresql-1   5/5   Running       0   63s
pod_info:postgres-jvzjnz-backup-postgresql-0   5/5   Terminating   0   83s
postgres-jvzjnz-backup-postgresql-1   5/5   Terminating   0   83s
No resources found in ns-bhsuh namespace.
delete cluster pod done
No resources found in ns-bhsuh namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-bhsuh namespace.
delete cluster done

cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-bhsuh-postgres-jvzjnz-20250911184321 --namespace ns-bhsuh`
backup.dataprotection.kubeblocks.io/backup-ns-bhsuh-postgres-jvzjnz-20250911184321 patched
`kbcli cluster delete-backup postgres-jvzjnz --name backup-ns-bhsuh-postgres-jvzjnz-20250911184321 --force --auto-approve --namespace ns-bhsuh`
Backup backup-ns-bhsuh-postgres-jvzjnz-20250911184321 deleted

cluster pg-basebackup backup
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`
`kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`
`kubectl get secrets kb-backuprepo-fc6xr -n kb-heauh -o jsonpath="{.data.accessKeyId}"`
`kubectl get secrets kb-backuprepo-fc6xr -n kb-heauh -o jsonpath="{.data.secretAccessKey}"`
KUBEBLOCKS NAMESPACE:kb-heauh
get kubeblocks namespace done
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-heauh -o jsonpath="{.items[0].data.root-user}"`
`kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-heauh -o jsonpath="{.items[0].data.root-password}"`
minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-heauh.svc.cluster.local:9000
list minio bucket kbcli-test
`echo 'mc config host add minioserver http://kbcli-test-minio.kb-heauh.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-8f45f86b6-jvj74 --namespace kb-heauh -- bash`
Unable to use a TTY - input is not a terminal or the right kind of file
list minio bucket done
default backuprepo:backuprepo-kbcli-test exists
`kbcli cluster backup postgres-jvzjnz --method pg-basebackup --namespace ns-bhsuh`
Backup backup-ns-bhsuh-postgres-jvzjnz-20250911184529 created successfully, you can view the progress:
        kbcli cluster list-backups --names=backup-ns-bhsuh-postgres-jvzjnz-20250911184529 -n ns-bhsuh
check backup status
`kbcli cluster list-backups postgres-jvzjnz --namespace ns-bhsuh`
NAME                                             NAMESPACE   SOURCE-CLUSTER    METHOD          STATUS   TOTAL-SIZE   DURATION   DELETION-POLICY   CREATE-TIME                  COMPLETION-TIME   EXPIRATION
backup-ns-bhsuh-postgres-jvzjnz-20250911184529   ns-bhsuh    postgres-jvzjnz   pg-basebackup                                    Delete            Sep 11,2025 18:45 UTC+0800
backup_status:postgres-jvzjnz-pg-basebackup-Running
check backup status done backup_status:backup-ns-bhsuh-postgres-jvzjnz-20250911184529 ns-bhsuh postgres-jvzjnz pg-basebackup Completed 11713105 10s Delete Sep 11,2025 18:45 UTC+0800 Sep 11,2025 18:45 UTC+0800

cluster restore backup
Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-jvzjnz-backup" not found
`kbcli cluster describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911184529 --namespace ns-bhsuh`
Name: backup-ns-bhsuh-postgres-jvzjnz-20250911184529
Cluster: postgres-jvzjnz
Namespace: ns-bhsuh
Spec:
  Method: pg-basebackup
  Policy Name: postgres-jvzjnz-postgresql-backup-policy
Actions:
  dp-backup-0:
    ActionType: Job
    WorkloadName:
dp-backup-0-backup-ns-bhsuh-postgres-jvzjnz-20250911184529-aac1
    TargetPodName: postgres-jvzjnz-postgresql-0
    Phase: Completed
    Start Time: Sep 11,2025 18:45 UTC+0800
    Completion Time: Sep 11,2025 18:45 UTC+0800
Status:
  Phase: Completed
  Total Size: 11713105
  ActionSet Name: postgresql-basebackup
  Repository: backuprepo-kbcli-test
  Duration: 10s
  Start Time: Sep 11,2025 18:45 UTC+0800
  Completion Time: Sep 11,2025 18:45 UTC+0800
  Path: /ns-bhsuh/postgres-jvzjnz-6589b244-609c-46f9-989f-1a47d1567b9a/postgresql/backup-ns-bhsuh-postgres-jvzjnz-20250911184529
  Time Range Start: Sep 11,2025 18:45 UTC+0800
  Time Range End: Sep 11,2025 18:45 UTC+0800
Warning Events:

`kbcli cluster restore postgres-jvzjnz-backup --backup backup-ns-bhsuh-postgres-jvzjnz-20250911184529 --namespace ns-bhsuh`
Cluster postgres-jvzjnz-backup created
check cluster status
`kbcli cluster list postgres-jvzjnz-backup --show-labels --namespace ns-bhsuh`
NAME                     NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
postgres-jvzjnz-backup   ns-bhsuh    postgresql           WipeOut              Creating   Sep 11,2025 18:45 UTC+0800   clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Creating
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz-backup --namespace ns-bhsuh`
NAME                                  NAMESPACE   CLUSTER                  COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-backup-postgresql-0   ns-bhsuh    postgres-jvzjnz-backup   postgresql   Running   secondary                0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:46 UTC+0800
postgres-jvzjnz-backup-postgresql-1   ns-bhsuh    postgres-jvzjnz-backup   postgresql   Running   primary                  0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:46 UTC+0800
check pod status done
check cluster role
check cluster role done primary: postgres-jvzjnz-backup-postgresql-1;secondary: postgres-jvzjnz-backup-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-backup-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kbcli cluster describe-backup --names backup-ns-bhsuh-postgres-jvzjnz-20250911184529 --namespace ns-bhsuh`
Name: backup-ns-bhsuh-postgres-jvzjnz-20250911184529
Cluster: postgres-jvzjnz
Namespace: ns-bhsuh
Spec:
  Method: pg-basebackup
  Policy Name: postgres-jvzjnz-postgresql-backup-policy
Actions:
  dp-backup-0:
    ActionType: Job
    WorkloadName: dp-backup-0-backup-ns-bhsuh-postgres-jvzjnz-20250911184529-aac1
    TargetPodName: postgres-jvzjnz-postgresql-0
    Phase: Completed
    Start Time: Sep 11,2025 18:45 UTC+0800
    Completion Time: Sep 11,2025 18:45 UTC+0800
Status:
  Phase: Completed
  Total Size: 11713105
  ActionSet Name: postgresql-basebackup
  Repository: backuprepo-kbcli-test
  Duration: 10s
  Start Time: Sep 11,2025 18:45 UTC+0800
  Completion Time: Sep 11,2025 18:45 UTC+0800
  Path: /ns-bhsuh/postgres-jvzjnz-6589b244-609c-46f9-989f-1a47d1567b9a/postgresql/backup-ns-bhsuh-postgres-jvzjnz-20250911184529
  Time Range Start: Sep 11,2025 18:45 UTC+0800
  Time Range End: Sep 11,2025 18:45 UTC+0800
Warning Events:

cluster connect
`echo 'create extension vector;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
ERROR:  extension "vector" already exists
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
                                          List of installed extensions
        Name        | Version |   Schema   |                              Description
--------------------+---------+------------+------------------------------------------------------------------------
 file_fdw           | 1.0     | public     | foreign-data wrapper for flat file access
 pg_auth_mon        | 1.1     | public     | monitor connection attempts per user
 pg_cron            | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_stat_kcache     | 2.2.3   | public     | Kernel statistics gathering
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpython3u         | 1.0     | pg_catalog | PL/Python3U untrusted procedural language
 set_user           | 4.1.0   | public     | similar to SET ROLE but with added logging
 vector             | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)
`echo 'show max_connections;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
 max_connections
-----------------
 200
(1 row)
connect cluster Success
set max_connections to 200
delete cluster postgres-jvzjnz-backup
`kbcli cluster delete postgres-jvzjnz-backup --auto-approve --namespace ns-bhsuh`
Cluster postgres-jvzjnz-backup deleted
pod_info:postgres-jvzjnz-backup-postgresql-0   5/5   Running       0   36s
postgres-jvzjnz-backup-postgresql-1   5/5   Running       0   36s
pod_info:postgres-jvzjnz-backup-postgresql-0   5/5   Terminating   0   56s
postgres-jvzjnz-backup-postgresql-1   5/5   Terminating   0   56s
No resources found in ns-bhsuh namespace.
delete cluster pod done
No resources found in ns-bhsuh namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-bhsuh namespace.
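The `list-instances` tables above print resources in Kubernetes quantity notation: `200m` CPU is 0.2 cores, and the odd-looking `644245094400m` memory is the same milli suffix applied to bytes, i.e. roughly 0.6Gi. A rough decoder, covering only the suffixes that actually appear in this log (`m` and `Gi`; this is an illustrative sketch, not the harness's parser):

```python
# Decode the Kubernetes quantity strings seen in the list-instances output.
# Only the suffixes appearing in this log are handled ("m", "Gi"); full
# parsers cover the whole suffix set (Ki, Mi, k, M, G, ...).
def parse_quantity(q: str) -> float:
    if q.endswith("Gi"):
        return float(q[:-2]) * 2**30   # binary gibi multiplier
    if q.endswith("m"):
        return float(q[:-1]) / 1000.0  # milli units (of cores or bytes)
    return float(q)

cpu = parse_quantity("200m")           # CPU request/limit in cores
mem = parse_quantity("644245094400m")  # milli-bytes -> bytes
```

`644245094400m` works out to 644245094.4 bytes, which is 0.6 * 2^30, explaining why the column looks so much larger than the Gi-denominated storage next to it.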
delete cluster done
No resources found in ns-bhsuh namespace.

cluster rebuild instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: postgres-jvzjnz-rebuildinstance-
  namespace: ns-bhsuh
spec:
  type: RebuildInstance
  clusterName: postgres-jvzjnz
  force: true
  rebuildFrom:
    - componentName: postgresql
      instances:
        - name: postgres-jvzjnz-postgresql-0
          backupName: backup-ns-bhsuh-postgres-jvzjnz-20250911184529
      inPlace: true

check cluster status before ops
check cluster status done cluster_status:Running
`kubectl create -f test_ops_cluster_postgres-jvzjnz.yaml`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-rebuildinstance-dt2wx created
create test_ops_cluster_postgres-jvzjnz.yaml Success
`rm -rf test_ops_cluster_postgres-jvzjnz.yaml`
check ops status
`kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh`
NAME                                               NAMESPACE   TYPE              CLUSTER           COMPONENT               STATUS     PROGRESS   CREATED-TIME
postgres-jvzjnz-postgresql-backup-schedule-8b7dg   ns-bhsuh    Reconfiguring     postgres-jvzjnz   postgresql,postgresql   Succeed    -/-        Sep 11,2025 18:27 UTC+0800
postgres-jvzjnz-postgresql-backup-schedule-tntn5   ns-bhsuh    Reconfiguring     postgres-jvzjnz   postgresql,postgresql   Succeed    -/-        Sep 11,2025 18:33 UTC+0800
postgres-jvzjnz-rebuildinstance-dt2wx              ns-bhsuh    RebuildInstance   postgres-jvzjnz                           Creating   -/-        Sep 11,2025 18:47 UTC+0800
ops_status:postgres-jvzjnz-rebuildinstance-dt2wx ns-bhsuh RebuildInstance postgres-jvzjnz Running -/- Sep 11,2025 18:47 UTC+0800
ops_status:postgres-jvzjnz-rebuildinstance-dt2wx ns-bhsuh RebuildInstance postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 18:47 UTC+0800
check ops status done ops_status:postgres-jvzjnz-rebuildinstance-dt2wx ns-bhsuh RebuildInstance postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:47 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-rebuildinstance-dt2wx --namespace ns-bhsuh`
opsrequest.operations.kubeblocks.io/postgres-jvzjnz-rebuildinstance-dt2wx patched
`kbcli cluster delete-ops --name postgres-jvzjnz-rebuildinstance-dt2wx --force --auto-approve --namespace ns-bhsuh`
OpsRequest postgres-jvzjnz-rebuildinstance-dt2wx deleted
check cluster status
`kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh`
NAME              NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
postgres-jvzjnz   ns-bhsuh    postgresql           WipeOut              Running   Sep 11,2025 17:21 UTC+0800   app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql
check cluster status done cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh`
NAME                           NAMESPACE   CLUSTER           COMPONENT    STATUS    ROLE        ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                             CREATED-TIME
postgres-jvzjnz-postgresql-0   ns-bhsuh    postgres-jvzjnz   postgresql   Running   secondary                0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000001/10.224.0.7   Sep 11,2025 18:47 UTC+0800
postgres-jvzjnz-postgresql-1   ns-bhsuh    postgres-jvzjnz   postgresql   Running   primary                  0    200m / 200m          644245094400m / 644245094400m   data:7Gi   aks-cicdamdpool-40497330-vmss000002/10.224.0.5   Sep 11,2025 18:39 UTC+0800
check pod status done
check cluster role
check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
check db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check db_client batch data Success
check readonly db_client batch data count
`echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop`
check readonly db_client batch data Success

cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-bhsuh-postgres-jvzjnz-20250911184529 --namespace ns-bhsuh`
backup.dataprotection.kubeblocks.io/backup-ns-bhsuh-postgres-jvzjnz-20250911184529 patched
`kbcli cluster delete-backup postgres-jvzjnz --name backup-ns-bhsuh-postgres-jvzjnz-20250911184529 --force --auto-approve --namespace ns-bhsuh`
Backup backup-ns-bhsuh-postgres-jvzjnz-20250911184529 deleted
`kubectl get backupschedule -l app.kubernetes.io/instance=postgres-jvzjnz`
`kubectl get backupschedule postgres-jvzjnz-postgresql-backup-schedule -ojsonpath='{.spec.schedules[*].backupMethod}'`
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
check backup status
`kbcli cluster list-backups postgres-jvzjnz --namespace ns-bhsuh`
No backups found in ns-bhsuh namespace.
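The repeated `backup_status:` and `No backups found` lines are a poll loop waiting for the scheduled backup to appear and reach `Completed`. The retry logic can be sketched as follows (the `fetch_status` callable, timeout, and interval are illustrative assumptions, not the harness's actual code):

```python
import time

def wait_for(fetch_status, want="Completed", timeout=300.0, interval=5.0, sleep=time.sleep):
    # Poll fetch_status() until it returns `want` or the timeout elapses,
    # mirroring the repeated "backup_status:..." checks in the log.
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status == want:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"last status: {status!r}")
        sleep(interval)

# Stubbed status sequence like the one in the log ("--" means no backup yet):
states = iter(["--", "Running", "Running", "Completed"])
result = wait_for(lambda: next(states), interval=0.0)
```

A fixed interval is fine for a CI check like this; the deadline guards against a backup that never leaves `Running`.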
`kubectl get backupschedule -l app.kubernetes.io/instance=postgres-jvzjnz `
`kubectl get backupschedule postgres-jvzjnz-postgresql-backup-schedule -ojsonpath='{.spec.schedules[*].backupMethod}' `
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change) (repeated)
backup_status:postgres-jvzjnz-pg-basebackup-Running
check backup status done
backup_status:postgres-jvzjnz-pg-basebackup-20250911105002 ns-bhsuh postgres-jvzjnz pg-basebackup Completed 11689864 10s Delete Sep 11,2025 18:50 UTC+0800 Sep 11,2025 18:50 UTC+0800 Sep 18,2025 18:50 UTC+0800
`kubectl get backupschedule -l app.kubernetes.io/instance=postgres-jvzjnz `
`kubectl get backupschedule postgres-jvzjnz-postgresql-backup-schedule -ojsonpath='{.spec.schedules[*].backupMethod}' `
backupschedule.dataprotection.kubeblocks.io/postgres-jvzjnz-postgresql-backup-schedule patched (no change) (repeated)
cluster restore backup
Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-jvzjnz-backup" not found
`kbcli cluster describe-backup --names postgres-jvzjnz-pg-basebackup-20250911105002 --namespace ns-bhsuh `
Name: postgres-jvzjnz-pg-basebackup-20250911105002
Cluster: postgres-jvzjnz
Namespace: ns-bhsuh
Spec:
  Method: pg-basebackup
  Policy Name: postgres-jvzjnz-postgresql-backup-policy
Actions:
  dp-backup-0:
    ActionType: Job
    WorkloadName: dp-backup-0-postgres-jvzjnz-pg-basebackup-20250911105002-cb231f
    TargetPodName:
      postgres-jvzjnz-postgresql-0
    Phase: Completed
    Start Time: Sep 11,2025 18:50 UTC+0800
    Completion Time: Sep 11,2025 18:50 UTC+0800
Status:
  Phase: Completed
  Total Size: 11689864
  ActionSet Name: postgresql-basebackup
  Repository: backuprepo-kbcli-test
  Duration: 10s
  Expiration Time: Sep 18,2025 18:50 UTC+0800
  Start Time: Sep 11,2025 18:50 UTC+0800
  Completion Time: Sep 11,2025 18:50 UTC+0800
  Path: /ns-bhsuh/postgres-jvzjnz-6589b244-609c-46f9-989f-1a47d1567b9a/postgresql/postgres-jvzjnz-pg-basebackup-20250911105002
  Time Range Start: Sep 11,2025 18:50 UTC+0800
  Time Range End: Sep 11,2025 18:50 UTC+0800
Warning Events:
`kbcli cluster restore postgres-jvzjnz-backup --backup postgres-jvzjnz-pg-basebackup-20250911105002 --namespace ns-bhsuh `
Cluster postgres-jvzjnz-backup created
check cluster status
`kbcli cluster list postgres-jvzjnz-backup --show-labels --namespace ns-bhsuh `
NAME                     NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME
postgres-jvzjnz-backup   ns-bhsuh    postgresql           WipeOut              Creating   Sep 11,2025 18:50 UTC+0800
LABELS: clusterdefinition.kubeblocks.io/name=postgresql
cluster_status:Creating (repeated while waiting)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances postgres-jvzjnz-backup --namespace ns-bhsuh `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
postgres-jvzjnz-backup-postgresql-0 ns-bhsuh postgres-jvzjnz-backup postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:50 UTC+0800
postgres-jvzjnz-backup-postgresql-1 ns-bhsuh postgres-jvzjnz-backup postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:50 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: postgres-jvzjnz-backup-postgresql-0;secondary: postgres-jvzjnz-backup-postgresql-1
check cluster connect
`echo '' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres`
check cluster connect done
`kbcli cluster describe-backup --names postgres-jvzjnz-pg-basebackup-20250911105002 --namespace ns-bhsuh `
(output identical to the describe-backup above)
cluster connect
`echo 'create extension vector;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres `
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a
terminal or the right kind of file
ERROR: extension "vector" already exists
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
List of installed extensions
        Name         | Version |   Schema   | Description
---------------------+---------+------------+------------------------------------------------------------------------
 file_fdw            | 1.0     | public     | foreign-data wrapper for flat file access
 pg_auth_mon         | 1.1     | public     | monitor connection attempts per user
 pg_cron             | 1.6     | pg_catalog | Job scheduler for PostgreSQL
 pg_stat_kcache      | 2.2.3   | public     | Kernel statistics gathering
 pg_stat_statements  | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 plpgsql             | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpython3u          | 1.0     | pg_catalog | PL/Python3U untrusted procedural language
 set_user            | 4.1.0   | public     | similar to SET ROLE but with added logging
 vector              | 0.7.4   | public     | vector data type and ivfflat and hnsw access methods
(9 rows)
`echo 'show max_connections;' | kubectl exec -it postgres-jvzjnz-backup-postgresql-0 --namespace ns-bhsuh -- psql -U postgres `
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
max_connections
-----------------
200
(1 row)
connect cluster Success
set max_connections to 200
delete cluster postgres-jvzjnz-backup
`kbcli cluster delete postgres-jvzjnz-backup --auto-approve --namespace ns-bhsuh `
Cluster postgres-jvzjnz-backup deleted
pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Running 0 68s postgres-jvzjnz-backup-postgresql-1 5/5 Running 0 68s
pod_info:postgres-jvzjnz-backup-postgresql-0 5/5 Terminating 0 88s postgres-jvzjnz-backup-postgresql-1 5/5 Terminating 0 88s
No resources found in ns-bhsuh namespace.
delete cluster pod done
check cluster resource non-exist OK: pvc
No resources found in ns-bhsuh namespace.
delete cluster done
cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups postgres-jvzjnz-pg-basebackup-20250911105002 --namespace ns-bhsuh `
backup.dataprotection.kubeblocks.io/postgres-jvzjnz-pg-basebackup-20250911105002 patched
`kbcli cluster delete-backup postgres-jvzjnz --name postgres-jvzjnz-pg-basebackup-20250911105002 --force --auto-approve --namespace ns-bhsuh `
Backup postgres-jvzjnz-pg-basebackup-20250911105002 deleted
cluster list-logs
`kbcli cluster list-logs postgres-jvzjnz --namespace ns-bhsuh `
No log files found.
Error from server (NotFound): pods "postgres-jvzjnz-postgresql-1" not found
cluster logs
`kbcli cluster logs postgres-jvzjnz --tail 30 --namespace ns-bhsuh `
Defaulted container "postgresql" out of: postgresql, pgbouncer, dbctl, kbagent, config-manager, pg-init-container (init), init-kbagent (init), kbagent-worker (init)
2025-09-11 10:49:13,054 INFO: no action. I am (postgres-jvzjnz-postgresql-1), the leader with the lock
2025-09-11 10:49:15.973 UTC [30] LOG {ticks: 0, maint: 0, retry: 0}
(the same "no action" leader-lock line repeats every 10 seconds, with a {ticks: 0, maint: 0, retry: 0} line every 30 seconds)
2025-09-11 10:52:33,047 INFO: no action.
I am (postgres-jvzjnz-postgresql-1), the leader with the lock
2025-09-11 10:52:43,042 INFO: no action. I am (postgres-jvzjnz-postgresql-1), the leader with the lock
2025-09-11 10:52:46.085 UTC [30] LOG {ticks: 0, maint: 0, retry: 0}
cluster logs running
`kbcli cluster logs postgres-jvzjnz --tail 30 --file-type=running --namespace ns-bhsuh `
==> /home/postgres/pgdata/pgroot/data/log/postgresql-2025-09-11.csv <==
2025-09-11 10:52:27.383 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,946,"SELECT",2025-09-11 10:39:21 GMT,2/949,0,LOG,00000,"AUDIT: SESSION,946,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967
(the remaining tail repeats the same pgaudit SESSION record for this "Patroni restapi" monitoring query roughly once per second, interleaved with an analogous "Patroni heartbeat" query; entries 947 onward omitted, capture truncated mid-entry)
('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:42.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,964,"SELECT",2025-09-11 10:39:21 GMT,2/967,0,LOG,00000,"AUDIT: SESSION,964,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), 
pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:43.027 GMT,"postgres","postgres",78,"[local]",68c2a6d9.4e,89,"SELECT",2025-09-11 10:39:21 GMT,3/91,0,LOG,00000,"AUDIT: SESSION,89,1,READ,SELECT,,,""SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, status, pg_catalog.current_setting('restore_command'), NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()"",",,,,,,,,,"Patroni heartbeat","client backend",,-5237322345073106040 2025-09-11 10:52:43.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,965,"SELECT",2025-09-11 10:39:21 GMT,2/968,0,LOG,00000,"AUDIT: SESSION,965,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, 
pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:44.386 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,966,"SELECT",2025-09-11 10:39:21 GMT,2/969,0,LOG,00000,"AUDIT: SESSION,966,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 
2025-09-11 10:52:45.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,967,"SELECT",2025-09-11 10:39:21 GMT,2/970,0,LOG,00000,"AUDIT: SESSION,967,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:45.882 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,968,"SELECT",2025-09-11 10:39:21 GMT,2/971,0,LOG,00000,"AUDIT: SESSION,968,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, 
pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:46.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,969,"SELECT",2025-09-11 10:39:21 GMT,2/972,0,LOG,00000,"AUDIT: SESSION,969,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:47.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,970,"SELECT",2025-09-11 
10:39:21 GMT,2/973,0,LOG,00000,"AUDIT: SESSION,970,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:48.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,971,"SELECT",2025-09-11 10:39:21 GMT,2/974,0,LOG,00000,"AUDIT: SESSION,971,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 
pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:49.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,972,"SELECT",2025-09-11 10:39:21 GMT,2/975,0,LOG,00000,"AUDIT: SESSION,972,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 2025-09-11 10:52:50.384 GMT,"postgres","postgres",79,"[local]",68c2a6d9.4f,973,"SELECT",2025-09-11 10:39:21 GMT,2/976,0,LOG,00000,"AUDIT: SESSION,973,1,READ,SELECT,,,""SELECT 
pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_flush_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), (pg_catalog.pg_stat_get_wal_receiver()).status, pg_catalog.current_setting('restore_command'), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni restapi","client backend",,3489562727129942967 ==> /home/postgres/pgdata/pgroot/data/log/postgresql-2025-09-11.log <== 2025-09-11 10:14:52 GMT [235]: [7-1] 68c2a11c.eb 0 HINT: Future log output will go to log destination "csvlog". 
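The pgaudit output above is dominated by once-per-second monitoring queries whose `application_name` is "Patroni restapi" or "Patroni heartbeat". A small filter makes application statements visible again; the sample lines below are abbreviated stand-ins for the real entries, not verbatim log content:

```shell
# Build a tiny sample pgaudit log (abbreviated, illustrative lines only).
cat > /tmp/pgaudit_sample.log <<'EOF'
2025-09-11 10:52:37.384 GMT,"postgres","postgres",79,"AUDIT: SESSION,958,1,READ,SELECT","Patroni restapi"
2025-09-11 10:52:38.100 GMT,"app","executions_loop",102,"AUDIT: SESSION,12,1,WRITE,INSERT","psql"
2025-09-11 10:52:43.027 GMT,"postgres","postgres",78,"AUDIT: SESSION,89,1,READ,SELECT","Patroni heartbeat"
EOF
# Drop the Patroni monitoring noise; keep everything else.
grep -v -e '"Patroni restapi"' -e '"Patroni heartbeat"' \
    /tmp/pgaudit_sample.log > /tmp/pgaudit_filtered.log
cat /tmp/pgaudit_filtered.log
```

Only the `psql` INSERT line survives the filter; against the real CSV log the same two `grep -v` patterns apply unchanged.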
INFO: 2025/09/11 10:27:11.333370 Files will be uploaded to storage: default 2025/09/11 10:27:11 NOTICE: S3 bucket kbcli-test: Streaming uploads using chunk size 50Mi will have maximum file size of 488.281Gi INFO: 2025/09/11 10:27:12.163553 FILE PATH: 0000000D000000000000000E.zst INFO: 2025/09/11 10:27:15.333658 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:15.827446 FILE PATH: 0000000D000000000000000F.00000028.backup.zst ERROR: 2025/09/11 10:27:15.827508 Error marking wal file 0000000D000000000000000F.00000028.backup as uploaded: open /home/postgres/pgdata/pgroot/data/pg_wal/walg_data/walg_archive_status/0000000D000000000000000F.00000028: permission denied INFO: 2025/09/11 10:27:16.036610 FILE PATH: 0000000D000000000000000F.zst INFO: 2025/09/11 10:27:16.728093 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:17.027494 FILE PATH: 0000000D000000000000000F.00000028.backup.zst INFO: 2025/09/11 10:27:17.152834 FILE PATH: 0000000D0000000000000010.zst ERROR: 2025/09/11 10:27:17.152883 Error marking wal file 0000000D0000000000000010 as uploaded: open /home/postgres/pgdata/pgroot/data/pg_wal/walg_data/walg_archive_status/0000000D0000000000000010: permission denied INFO: 2025/09/11 10:27:17.532397 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:17.740895 FILE PATH: 0000000D0000000000000010.zst INFO: 2025/09/11 10:27:32.935405 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:33.436220 FILE PATH: 0000000D0000000000000011.zst INFO: 2025/09/11 10:27:39.942222 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:40.242268 FILE PATH: 0000000D0000000000000012.zst INFO: 2025/09/11 10:27:47.541005 Files will be uploaded to storage: default INFO: 2025/09/11 10:27:47.945297 FILE PATH: 0000000D0000000000000013.zst 2025-09-11 10:35:04 GMT [3680]: [6-1] 68c2a5d7.e60 0 LOG: ending log output to stderr 2025-09-11 10:35:04 GMT [3680]: [7-1] 68c2a5d7.e60 0 HINT: Future log output will go to log destination 
"csvlog". 2025-09-11 10:35:34 GMT [63]: [6-1] 68c2a5f6.3f 0 LOG: ending log output to stderr 2025-09-11 10:35:34 GMT [63]: [7-1] 68c2a5f6.3f 0 HINT: Future log output will go to log destination "csvlog". 2025-09-11 10:36:03 GMT [63]: [7-1] 68c2a612.3f 0 LOG: ending log output to stderr 2025-09-11 10:36:03 GMT [63]: [8-1] 68c2a612.3f 0 HINT: Future log output will go to log destination "csvlog". 2025-09-11 10:38:33 GMT [555]: [6-1] 68c2a6a8.22b 0 LOG: ending log output to stderr 2025-09-11 10:38:33 GMT [555]: [7-1] 68c2a6a8.22b 0 HINT: Future log output will go to log destination "csvlog". 2025-09-11 10:39:20 GMT [62]: [6-1] 68c2a6d8.3e 0 LOG: ending log output to stderr 2025-09-11 10:39:20 GMT [62]: [7-1] 68c2a6d8.3e 0 HINT: Future log output will go to log destination "csvlog". LB_TYPE is set to: intranet cluster expose check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster expose postgres-jvzjnz --auto-approve --force=true --type intranet --enable false --components postgresql --role-selector primary --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-expose-lsg75 created successfully, you can view the progress: kbcli cluster describe-ops postgres-jvzjnz-expose-lsg75 -n ns-bhsuh check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-expose-lsg75 ns-bhsuh Expose postgres-jvzjnz Creating -/- Sep 11,2025 18:52 UTC+0800 check cluster status `kbcli cluster list postgres-jvzjnz --show-labels --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-jvzjnz ns-bhsuh 
postgresql WipeOut Running Sep 11,2025 17:21 UTC+0800 app.kubernetes.io/instance=postgres-jvzjnz,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-jvzjnz --namespace ns-bhsuh ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-jvzjnz-postgresql-0 ns-bhsuh postgres-jvzjnz postgresql Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000001/10.224.0.7 Sep 11,2025 18:47 UTC+0800 postgres-jvzjnz-postgresql-1 ns-bhsuh postgres-jvzjnz postgresql Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-40497330-vmss000002/10.224.0.5 Sep 11,2025 18:39 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-jvzjnz-postgresql-1;secondary: postgres-jvzjnz-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-jvzjnz-postgresql-1 --namespace ns-bhsuh -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-jvzjnz --status all --namespace ns-bhsuh ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-jvzjnz-postgresql-backup-schedule-8b7dg ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:27 UTC+0800 postgres-jvzjnz-postgresql-backup-schedule-tntn5 ns-bhsuh Reconfiguring postgres-jvzjnz postgresql,postgresql Succeed -/- Sep 11,2025 18:33 UTC+0800 postgres-jvzjnz-expose-lsg75 ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 18:52 UTC+0800 ops_status:postgres-jvzjnz-expose-lsg75 ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 18:52 UTC+0800 ops_status:postgres-jvzjnz-expose-lsg75 ns-bhsuh Expose postgres-jvzjnz postgresql Running 0/1 Sep 11,2025 18:52 UTC+0800 check ops status done ops_status:postgres-jvzjnz-expose-lsg75 ns-bhsuh 
Expose postgres-jvzjnz postgresql Succeed 1/1 Sep 11,2025 18:52 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-jvzjnz-expose-lsg75 --namespace ns-bhsuh ` opsrequest.operations.kubeblocks.io/postgres-jvzjnz-expose-lsg75 patched `kbcli cluster delete-ops --name postgres-jvzjnz-expose-lsg75 --force --auto-approve --namespace ns-bhsuh ` OpsRequest postgres-jvzjnz-expose-lsg75 deleted check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-1 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-jvzjnz-postgresql-0 -n ns-bhsuh -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success delete cluster postgres-jvzjnz `kbcli cluster delete postgres-jvzjnz --auto-approve --namespace ns-bhsuh ` Cluster postgres-jvzjnz deleted pod_info:postgres-jvzjnz-postgresql-0 5/5 Running 0 5m17s postgres-jvzjnz-postgresql-1 5/5 Running 0 14m pod_info:postgres-jvzjnz-postgresql-0 5/5 Terminating 0 5m37s postgres-jvzjnz-postgresql-1 5/5 Terminating 0 14m No resources found in ns-bhsuh namespace. delete cluster pod done No resources found in ns-bhsuh namespace. check cluster resource non-exist OK: pvc No resources found in ns-bhsuh namespace. delete cluster done No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. No resources found in ns-bhsuh namespace. Postgresql Test Suite All Done!
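The run above unblocks deletion of a finished OpsRequest by first clearing its finalizers with a JSON merge patch, then force-deleting it. A dry-run sketch of that sequence; the resource name, ops name, namespace, and flags are copied from the log, while the `DRY_RUN` guard and `run` helper are my additions for safe local execution:

```shell
OPS_NAME=postgres-jvzjnz-expose-lsg75
NAMESPACE=ns-bhsuh
# Merge patch that empties metadata.finalizers so deletion can proceed.
PATCH='{"metadata":{"finalizers":[]}}'
DRY_RUN=1   # set to 0 to actually run the commands against a cluster
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run kubectl patch -p "$PATCH" --type=merge opsrequests.operations "$OPS_NAME" --namespace "$NAMESPACE"
run kbcli cluster delete-ops --name "$OPS_NAME" --force --auto-approve --namespace "$NAMESPACE"
```

With `DRY_RUN=1` the commands are only printed, prefixed with `+`, so the sequence can be reviewed before pointing it at a real cluster.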
Test Engine: postgresql Test Type: 2 --------------------------------------Postgresql (Topology = replication Replicas 2) Test Result-------------------------------------- [PASSED]|[Create]|[ComponentDefinition=postgresql-16-1.0.1;ComponentVersion=postgresql;ServiceVersion=16.4.0;]|[Description=Create a cluster with the specified component definition postgresql-16-1.0.1 and component version postgresql and service version 16.4.0] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[Expose]|[Enable=true;TYPE=intranet;ComponentName=postgresql]|[Description=Expose Enable the intranet service with postgresql component] [PASSED]|[Failover]|[HA=OOM;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience OOM either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Memory load.] [PASSED]|[No-Failover]|[HA=Full CPU;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience CPU full either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.] [PASSED]|[Stop]|[-]|[Description=Stop the cluster] [PASSED]|[Start]|[-]|[Description=Start the cluster] [PASSED]|[VolumeExpansion]|[ComponentName=postgresql]|[Description=VolumeExpansion the cluster specify component postgresql] [PASSED]|[Failover]|[HA=Network Bandwidth;Durations=2m;ComponentName=postgresql]|[Description=Simulates network bandwidth fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to bandwidth network.] 
[PASSED]|[SwitchOver]|[ComponentName=postgresql]|[Description=SwitchOver the cluster specify component postgresql] [PASSED]|[Bench]|[ComponentName=postgresql]|[Description=Bench the cluster service with postgresql component] [PASSED]|[Bench]|[HostType=LB;ComponentName=postgresql]|[Description=Bench the cluster LB service with postgresql component] [PASSED]|[No-Failover]|[HA=Network Delay;Durations=2m;ComponentName=postgresql]|[Description=Simulates network delay fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to delay network.] [PASSED]|[Failover]|[HA=Network Loss;Durations=2m;ComponentName=postgresql]|[Description=Simulates network loss fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to loss network.] [PASSED]|[Failover]|[HA=Kill 1;ComponentName=postgresql]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.] [PASSED]|[Reconfiguring]|[ComponentName=postgresql;shared_buffers=512MB]|[Description=Reconfiguring the cluster specify component postgresql set shared_buffers=512MB] [PASSED]|[No-Failover]|[HA=Network Partition;Durations=2m;ComponentName=postgresql]|[Description=Simulates network partition fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to partition network.] [PASSED]|[Failover]|[HA=Pod Failure;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience failure for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to failure.] 
[PASSED]|[No-Failover]|[HA=Connection Stress;ComponentName=postgresql]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.] [PASSED]|[No-Failover]|[HA=DNS Error;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience DNS service errors for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to DNS service errors.] [PASSED]|[HorizontalScaling Out]|[ComponentName=postgresql]|[Description=HorizontalScaling Out the cluster specify component postgresql] [PASSED]|[HorizontalScaling In]|[ComponentName=postgresql]|[Description=HorizontalScaling In the cluster specify component postgresql] [PASSED]|[VerticalScaling]|[ComponentName=postgresql]|[Description=VerticalScaling the cluster specify component postgresql] [PASSED]|[Reconfiguring]|[ComponentName=postgresql;max_connections=200]|[Description=Reconfiguring the cluster specify component postgresql set max_connections=200] [PASSED]|[No-Failover]|[HA=DNS Random;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience random IP addresses being returned by the DNS service for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to the DNS service returning random IP addresses.] [PASSED]|[No-Failover]|[HA=IOChaos Mistake;Durations=2m;]|[Description=Simulates conditions where pods experience IO mistake faults either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to IO mistake faults.] 
[PASSED]|[No-Failover]|[HA=Time Offset;Durations=2m;ComponentName=postgresql]|[Description=Simulates a time offset scenario, testing the application's resilience to potential slowness/unavailability of some replicas caused by clock skew.]
[PASSED]|[Failover]|[HA=Network Corrupt;Durations=2m;ComponentName=postgresql]|[Description=Simulates a network corruption fault, testing the application's resilience to potential slowness/unavailability of some replicas caused by corrupted network packets.]
[PASSED]|[No-Failover]|[HA=Network Duplicate;Durations=2m;ComponentName=postgresql]|[Description=Simulates a network duplication fault, testing the application's resilience to potential slowness/unavailability of some replicas caused by duplicated network packets.]
[WARNING]|[CheckFailover]|[FailoverType=faultover]|[Description=-]
[PASSED]|[Failover]|[HA=IOChaos Fault;ErrNO=;ErrMessage=;Durations=2m;]|[Description=Simulates conditions where pods experience IO faults, testing the application's resilience to potential slowness/unavailability of some replicas caused by IO faults.]
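The network fault cases are injected with Chaos Mesh (installed earlier via the chaos-mesh Helm repo). A representative NetworkChaos for the 2-minute delay scenario, targeting the cluster's pods by label — the selector labels and latency value are illustrative:

```yaml
# Sketch of a Chaos Mesh network-delay experiment against the cluster.
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: postgres-jvzjnz-network-delay
  namespace: ns-bhsuh
spec:
  action: delay
  mode: one                       # inject into one matching pod
  selector:
    namespaces:
    - ns-bhsuh
    labelSelectors:
      app.kubernetes.io/instance: postgres-jvzjnz   # illustrative label
  delay:
    latency: "100ms"
  duration: "2m"
```

Swapping `action` to `loss`, `partition`, `corrupt`, or `duplicate` (with the corresponding spec block) yields the other network scenarios in this report.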
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster's TerminationPolicy to WipeOut]
[PASSED]|[Backup]|[BackupMethod=wal-g]|[Description=Back up the cluster with wal-g]
[PASSED]|[Restore To Time]|[BackupMethod=wal-g]|[Description=Restore the cluster to a point in time from the wal-g backup]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=wal-g]|[Description=Delete the wal-g restore cluster]
[PASSED]|[Restore]|[BackupMethod=wal-g]|[Description=Restore the cluster from the wal-g backup]
[PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=wal-g]|[Description=Delete the wal-g restore cluster]
[PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=16.4.0;ComponentVersionTo=16.9.0]|[Description=Upgrade the cluster's specified component postgresql service version from 16.4.0 to 16.9.0]
[PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=16.9.0;ComponentVersionTo=16.4.0]|[Description=Upgrade the cluster's specified component postgresql service version from 16.9.0 to 16.4.0]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Failover]|[HA=Pod Kill;ComponentName=postgresql]|[Description=Simulates conditions where pods are killed, testing the application's resilience to potential slowness/unavailability of some replicas caused by pod kills.]
[PASSED]|[No-Failover]|[HA=IOChaos AttrOverride;Durations=2m;]|[Description=Simulates conditions where pods experience IO attribute override faults, testing the application's resilience to potential slowness/unavailability of some replicas caused by IO attribute overrides.]
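The wal-g backup/restore sequence starts from a Backup object referencing the cluster's backup policy. A sketch under the dataprotection API — the backup policy name is illustrative; the real one comes from `kubectl get backuppolicy -n ns-bhsuh`:

```yaml
# Hypothetical on-demand wal-g backup of the cluster.
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: Backup
metadata:
  name: postgres-jvzjnz-walg-backup
  namespace: ns-bhsuh
spec:
  backupMethod: wal-g                                          # matches BackupMethod=wal-g above
  backupPolicyName: postgres-jvzjnz-postgresql-backup-policy   # illustrative name
```

The Restore and Restore To Time cases then create a new cluster from this backup (point-in-time restore additionally names a target timestamp within the archived WAL range).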
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster's TerminationPolicy to WipeOut]
[PASSED]|[Backup]|[BackupMethod=volume-snapshot]|[Description=Back up the cluster with volume-snapshot]
[PASSED]|[Restore]|[BackupMethod=volume-snapshot]|[Description=Restore the cluster from the volume-snapshot backup]
[PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=volume-snapshot]|[Description=Delete the volume-snapshot restore cluster]
[PASSED]|[Backup]|[BackupMethod=pg-basebackup]|[Description=Back up the cluster with pg-basebackup]
[PASSED]|[Restore]|[BackupMethod=pg-basebackup]|[Description=Restore the cluster from the pg-basebackup backup]
[PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=pg-basebackup]|[Description=Delete the pg-basebackup restore cluster]
[PASSED]|[RebuildInstance]|[ComponentName=postgresql]|[Description=Rebuild a cluster instance of the specified component postgresql]
[PASSED]|[Backup]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=Back up the cluster on a schedule with pg-basebackup]
[PASSED]|[Restore]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=Restore the cluster from the scheduled pg-basebackup backup]
[PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=Delete the scheduled pg-basebackup restore cluster]
[PASSED]|[Expose]|[Disable=true;TYPE=intranet;ComponentName=postgresql]|[Description=Disable the exposed intranet service of the postgresql component]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
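The Schedule=true pg-basebackup cases above correspond to enabling a backup schedule on the Cluster itself. A sketch under the v1 Cluster API shown earlier in this log — the cron expression and retention period are illustrative, and the exact `backup` field shape should be verified against the Cluster CRD of the installed KubeBlocks release:

```yaml
# Hypothetical scheduled-backup stanza on the existing Cluster spec.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: postgres-jvzjnz
  namespace: ns-bhsuh
spec:
  backup:
    enabled: true
    method: pg-basebackup
    cronExpression: "0 2 * * *"   # daily at 02:00 UTC, illustrative
    retentionPeriod: 7d           # illustrative retention window
```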