source commons files source engines files source kubeblocks files `kubectl get namespace | grep ns-pkuvx ` `kubectl create namespace ns-pkuvx` namespace/ns-pkuvx created create namespace ns-pkuvx done download kbcli `gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)` `curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.0` Your system is linux_amd64 Installing kbcli ... Downloading ... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 33.6M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 33.6M 100 33.6M 0 0 50.6M 0 --:--:-- --:--:-- --:--:-- 233M kbcli installed successfully. Kubernetes: v1.32.5-eks-5d4a308 KubeBlocks: 1.0.0 kbcli: 1.0.0 Make sure your docker service is running and begin your journey with kbcli: kbcli playground init For more information on how to get started, please visit: https://kubeblocks.io download kbcli v1.0.0 done Kubernetes: v1.32.5-eks-5d4a308 KubeBlocks: 1.0.0 kbcli: 1.0.0 Kubernetes Env: v1.32.5-eks-5d4a308 check snapshot controller check snapshot controller done eks default-vsc found POD_RESOURCES: No resources found found default storage class: gp3 KubeBlocks version is:1.0.0 skip upgrade KubeBlocks current KubeBlocks version: 1.0.0 Error: no repositories to show helm repo add chaos-mesh https://charts.chaos-mesh.org "chaos-mesh" has been added to your repositories add helm chart repo chaos-mesh success chaos mesh already installed check component definition set component name:postgresql set component version set component version:postgresql set service versions:16.4.0,15.7.0,14.8.0,14.7.2,12.15.0,12.14.1,12.14.0 set service versions sorted:12.14.0,12.14.1,12.15.0,14.7.2,14.8.0,15.7.0,16.4.0 set postgresql component definition set postgresql component definition postgresql-16-1.0.0-alpha.0 set replicas first:2,12.14.0|2,12.14.1|2,12.15.0|2,14.7.2|2,14.8.0|2,15.7.0|2,16.4.0 set replicas third:2,12.15.0 set replicas fourth:2,12.14.0 set minimum cmpv service version set minimum cmpv service version replicas:2,12.14.0 REPORT_COUNT:1 CLUSTER_TOPOLOGY:replication topology replication found in cluster definition postgresql set postgresql component definition set postgresql component definition postgresql-12-1.0.0-alpha.0 LIMIT_CPU:0.1 LIMIT_MEMORY:0.5 storage size: 3 No resources found in ns-pkuvx namespace. 
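For reference, the component definitions and service versions resolved above can also be listed directly from the KubeBlocks CRDs; a minimal sketch, assuming the standard componentdefinitions/componentversions resources are installed (resource names not taken from this run): `kubectl get componentdefinitions.apps.kubeblocks.io | grep postgresql` `kubectl get componentversions.apps.kubeblocks.io postgresql -o yaml`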
termination_policy:Delete create 2 replica Delete postgresql cluster check component definition set component definition by component version check cmpd by labels set component definition1: postgresql-12-1.0.0-alpha.0 by component version:postgresql apiVersion: apps.kubeblocks.io/v1 kind: Cluster metadata: name: postgres-cyetms namespace: ns-pkuvx spec: clusterDef: postgresql topology: replication terminationPolicy: Delete componentSpecs: - name: postgresql serviceVersion: 12.14.0 labels: apps.kubeblocks.postgres.patroni/scope: postgres-cyetms-postgresql replicas: 2 disableExporter: true resources: limits: cpu: 100m memory: 0.5Gi requests: cpu: 100m memory: 0.5Gi volumeClaimTemplates: - name: data spec: storageClassName: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi `kubectl apply -f test_create_postgres-cyetms.yaml` cluster.apps.kubeblocks.io/postgres-cyetms created apply test_create_postgres-cyetms.yaml Success `rm -rf test_create_postgres-cyetms.yaml` check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Creating May 28,2025 11:35 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:35 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.username}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.password}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.port}"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres check pod postgres-cyetms-postgresql-1 container_name postgresql exist password G18jFmD652 check pod postgres-cyetms-postgresql-1 container_name pgbouncer exist password G18jFmD652 check pod postgres-cyetms-postgresql-1 container_name kbagent exist password G18jFmD652 check pod postgres-cyetms-postgresql-1 container_name config-manager exist password G18jFmD652 No container logs contain secret password.
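The jsonpath values above are base64-encoded; a minimal sketch of decoding them locally and reusing them with psql over TCP, assuming the same secret name (the decoding step is not part of the recorded run): `DB_PASSWORD=$(kubectl get secret postgres-cyetms-postgresql-account-postgres -n ns-pkuvx -o jsonpath='{.data.password}' | base64 -d)` `kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- env PGPASSWORD="$DB_PASSWORD" psql -h 127.0.0.1 -U postgres -d postgres`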
describe cluster `kbcli cluster describe postgres-cyetms --namespace ns-pkuvx ` Name: postgres-cyetms Created Time: May 28,2025 11:35 UTC+0800 NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY ns-pkuvx postgresql replication Running Delete Endpoints: COMPONENT INTERNAL EXTERNAL postgresql postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local:5432 postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local:6432 Topology: COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME postgresql 12.14.0 postgres-cyetms-postgresql-0 secondary Running us-west-2a ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:35 UTC+0800 postgresql 12.14.0 postgres-cyetms-postgresql-1 primary Running us-west-2a ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 Resources Allocation: COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS postgresql 100m / 100m 512Mi / 512Mi data:3Gi kb-default-sc Images: COMPONENT COMPONENT-DEFINITION IMAGE postgresql postgresql-12-1.0.0-alpha.0 docker.io/apecloud/spilo:12.14.0-pgvector-v0.6.1 docker.io/apecloud/pgbouncer:1.19.0 docker.io/apecloud/spilo:12.15.0-pgvector-v0.6.1 docker.io/apecloud/kubeblocks-tools:1.0.0 Data Protection: BACKUP-REPO AUTO-BACKUP BACKUP-SCHEDULE BACKUP-METHOD BACKUP-RETENTION RECOVERABLE-TIME Show cluster events: kbcli cluster list-events -n ns-pkuvx postgres-cyetms `kbcli cluster label postgres-cyetms app.kubernetes.io/instance- --namespace ns-pkuvx ` label "app.kubernetes.io/instance" not found. `kbcli cluster label postgres-cyetms app.kubernetes.io/instance=postgres-cyetms --namespace ns-pkuvx ` `kbcli cluster label postgres-cyetms --list --namespace ns-pkuvx ` NAME NAMESPACE LABELS postgres-cyetms ns-pkuvx app.kubernetes.io/instance=postgres-cyetms clusterdefinition.kubeblocks.io/name=postgresql label cluster app.kubernetes.io/instance=postgres-cyetms Success `kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=postgres-cyetms --namespace ns-pkuvx ` `kbcli cluster label postgres-cyetms --list --namespace ns-pkuvx ` NAME NAMESPACE LABELS postgres-cyetms ns-pkuvx app.kubernetes.io/instance=postgres-cyetms case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=postgresql label cluster case.name=kbcli.test1 Success `kbcli cluster label postgres-cyetms case.name=kbcli.test2 --overwrite --namespace ns-pkuvx ` `kbcli cluster label postgres-cyetms --list --namespace ns-pkuvx ` NAME NAMESPACE LABELS postgres-cyetms ns-pkuvx app.kubernetes.io/instance=postgres-cyetms case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=postgresql label cluster case.name=kbcli.test2 Success `kbcli cluster label postgres-cyetms case.name- --namespace ns-pkuvx ` `kbcli cluster label postgres-cyetms --list --namespace ns-pkuvx ` NAME NAMESPACE LABELS postgres-cyetms ns-pkuvx app.kubernetes.io/instance=postgres-cyetms clusterdefinition.kubeblocks.io/name=postgresql delete cluster label case.name Success cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file CREATE EXTENSION Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), 
init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | echo '\dx;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) connect cluster Success insert batch data by db client Error from server (NotFound): pods "test-db-client-executionloop-postgres-cyetms" not found DB_CLIENT_BATCH_DATA_COUNT: `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-postgres-cyetms --namespace ns-pkuvx ` Error from server (NotFound): pods "test-db-client-executionloop-postgres-cyetms" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-postgres-cyetms" not found `kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.username}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.password}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.port}"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres apiVersion: v1 kind: Pod metadata: name: test-db-client-executionloop-postgres-cyetms namespace: ns-pkuvx spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - "postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local" - "--user" - "postgres" - "--password" - "G18jFmD652" - "--port" - "5432" - "--dbtype" - "postgresql" - "--test" - "executionloop" - "--duration" - "60" - "--interval" - "1" restartPolicy: Never `kubectl apply -f test-db-client-executionloop-postgres-cyetms.yaml` pod/test-db-client-executionloop-postgres-cyetms created apply test-db-client-executionloop-postgres-cyetms.yaml Success `rm -rf test-db-client-executionloop-postgres-cyetms.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 6s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 10s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 16s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 22s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 28s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 33s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 39s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 45s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 51s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 56s pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 1/1 Running 0 62s check pod test-db-client-executionloop-postgres-cyetms status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-postgres-cyetms 0/1 Completed 0 68s check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:35 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms
postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done --host postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local --user postgres --password G18jFmD652 --port 5432 --dbtype postgresql --test executionloop --duration 60 --interval 1 SLF4J(I): Connected with provider of type [ch.qos.logback.classic.spi.LogbackServiceProvider] 03:39:37.802 [main] DEBUG com.clickhouse.jdbc.ClickHouseDriver -- ClickHouse Driver 0.0.0.0(JDBC: 0.0.0.0) registered WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.postgresql.jdbc.TimestampUtils (file:/app/oneclient-1.0-all.jar) to field java.util.TimeZone.defaultTimeZone WARNING: Please consider reporting this to the maintainers of org.postgresql.jdbc.TimestampUtils WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Execution loop start: create databases executions_loop CREATE DATABASE executions_loop; reconnect connection executions_loop drop table executions_loop_table DROP TABLE IF EXISTS executions_loop_table; create table executions_loop_table CREATE TABLE IF NOT EXISTS executions_loop_table (id SERIAL PRIMARY KEY, value TEXT, tinyint_col SMALLINT, smallint_col SMALLINT, integer_col INTEGER, bigint_col BIGINT, real_col REAL, double_col DOUBLE PRECISION, numeric_col NUMERIC(10, 2), date_col DATE, time_col TIME, timestamp_col TIMESTAMP, timestamptz_col TIMESTAMP WITH TIME ZONE, interval_col INTERVAL, boolean_col BOOLEAN, char_col CHAR(10), varchar_col VARCHAR(255), text_col TEXT, bytea_col BYTEA, uuid_col UUID, json_col JSON, jsonb_col JSONB, xml_col XML, enum_col VARCHAR(10) CHECK (enum_col IN ('Option1', 'Option2', 'Option3')), set_col VARCHAR(255) CHECK (set_col IN ('Value1', 'Value2', 'Value3')), int_array_col INTEGER[], text_array_col TEXT[], point_col POINT, line_col LINE, lseg_col LSEG, box_col BOX, path_col PATH, polygon_col POLYGON, circle_col CIRCLE, cidr_col CIDR, inet_col INET, macaddr_col MACADDR, macaddr8_col MACADDR8, bit_col BIT(8), bit_var_col BIT VARYING(8), varbit_col BIT VARYING(8), money_col MONEY, oid_col OID, regproc_col REGPROC, regprocedure_col REGPROCEDURE, regoper_col REGOPER, regoperator_col REGOPERATOR, regclass_col REGCLASS, regtype_col REGTYPE, regrole_col REGROLE, regnamespace_col REGNAMESPACE, regconfig_col REGCONFIG, regdictionary_col REGDICTIONARY ); Execution loop start:INSERT INTO executions_loop_table (value, tinyint_col, smallint_col, integer_col, bigint_col, real_col, double_col, numeric_col, date_col, time_col, timestamp_col, timestamptz_col, interval_col, boolean_col, char_col, varchar_col, text_col, bytea_col, uuid_col, json_col, jsonb_col, xml_col, enum_col, set_col, int_array_col, text_array_col, point_col, line_col, lseg_col, box_col, path_col, polygon_col, circle_col, cidr_col, inet_col, macaddr_col, macaddr8_col, bit_col, bit_var_col, varbit_col, money_col, oid_col, regproc_col, regprocedure_col, regoper_col, regoperator_col, regclass_col, regtype_col, regrole_col, regnamespace_col, regconfig_col, regdictionary_col) VALUES 
('executions_loop_test_1', 78, 7542, -1287439883, -3366115600242101884, 0.7863183, 0.5127483651324556, 12.107997286023409, '2025-05-28', '03:39:38', '2025-05-28 03:39:38.992', CURRENT_TIMESTAMP, '1 hours 56 minutes 4 seconds', TRUE, 'mEWH038wRb', 'iMJDGJfYwxB55Pt8mIr00KLfvY4Ocy1PLXI9yEKbczEiZMPAbVNIOdSiUZ2lryfMF4Nd7iG0RtgBPEjxTVJSuwXMte5vnhgEZtJWZOuR9H0JrT9r3IsBDv6V6mosusQtHNcSi0Avtl3qDlHRLsXQVuTfS0RHIKreYp9ImgWYkTAHlPjj4fTkwqDnjevRlpQxy1TrPtZp9DHYM91GX1AUp309SmslepXhns4lhvLyZ2dsShatwN4HMteGowrmZNs', 'thHz3o2uYW09l0rLmZLpqPTH4w6c9x161xym53Dr8c7lqfJdWY4sbqZCb6khSFc5YHWpDMyQmJqHykjQqxKKe5imsM8jf9SawoqQMnEn6NXBTRZMKb8SbtFWBsO6zaIobQB6NqpN0pZPDxllkt5cp2Lc6WWnFkHk489kpdNc8hZN5erbeNZ44O6I7NtrSdltahEqZyHYP6IaFGuhaJZvD9OQFnSWDhm4YExPAjGA4dQjIjFnYkUHIB4vnnNFl8s', decode('d52e40063aa4809ffe58', 'hex'), '53a302e9-a689-428f-bfd4-235e8383596d', '{"key1": "oYTNdiWFlG", "key2": 72}', '{"key1": "SJ7xK5QxFO", "key2": 1}', 'KiTTHG7QWf85', 'Option3', 'Value1', ARRAY[70, 20, 76], ARRAY['NV57c1BleC', 'GdBq1rsBnr', 'coQyyhq2NI'], '(83.36312677126176, 23.17127173808534)', '{46.117855204, 32.1390268164909, 48.120474550094805}', '[(59.694898522045, 79.02629425757033), (33.19900600250472, 75.36467318852601)]', '((76.23962585563736, 39.155793072026235), (68.23442985435946, 94.49149837339898))', '((62.65679663422448, 39.392065095816605), (49.953694359135994, 10.056398288583656), (33.35699109450313, 65.44083961579531))', '((14.80650791992274, 29.06026627170921), (62.635957550747875, 88.19592334758337), (2.6194965972967443, 58.571459064984644), (66.09744748134266, 24.338103244589536))', '<(75.614798, 43.116037), 4.926308>', '192.168.182.0/24', '192.168.227.185', '08:00:2b:01:02:03', '08:00:2b:01:02:03:04:05', B'10101010', B'10101010', B'10101010', '$564.813146386182', -2119859647, 'acos', abs(1), '#-', +1, 'pg_class', 'integer', 'postgres', 'pg_catalog', 'simple', 'english_stem' ); [ 1s ] executions total: 4 successful: 4 failed: 0 disconnect: 0 [ 2s ] executions total: 44 successful: 44 failed: 0 disconnect: 0 [ 3s ] executions total: 111 successful: 111 failed: 0 disconnect: 0 [ 4s ] executions total: 139 successful: 139 failed: 0 disconnect: 0 [ 5s ] executions total: 189 successful: 189 failed: 0 disconnect: 0 [ 6s ] executions total: 250 successful: 250 failed: 0 disconnect: 0 [ 7s ] executions total: 277 successful: 277 failed: 0 disconnect: 0 [ 8s ] executions total: 329 successful: 329 failed: 0 disconnect: 0 [ 9s ] executions total: 396 successful: 396 failed: 0 disconnect: 0 [ 10s ] executions total: 420 successful: 420 failed: 0 disconnect: 0 [ 11s ] executions total: 457 successful: 457 failed: 0 disconnect: 0 [ 12s ] executions total: 498 successful: 498 failed: 0 disconnect: 0 [ 13s ] executions total: 556 successful: 556 failed: 0 disconnect: 0 [ 14s ] executions total: 620 successful: 620 failed: 0 disconnect: 0 [ 15s ] executions total: 641 successful: 641 failed: 0 disconnect: 0 [ 16s ] executions total: 657 successful: 657 failed: 0 disconnect: 0 [ 17s ] executions total: 668 successful: 668 failed: 0 disconnect: 0 [ 18s ] executions total: 697 successful: 697 failed: 0 disconnect: 0 [ 19s ] executions total: 754 successful: 754 failed: 0 disconnect: 0 [ 20s ] executions total: 804 successful: 804 failed: 0 disconnect: 0 [ 21s ] executions total: 848 successful: 848 failed: 0 disconnect: 0 [ 22s ] executions total: 889 successful: 889 failed: 0 disconnect: 0 [ 23s ] executions total: 909 successful: 909 failed: 0 disconnect: 0 [ 24s ] executions total: 960 successful: 960 failed: 0
disconnect: 0 [ 25s ] executions total: 1019 successful: 1019 failed: 0 disconnect: 0 [ 26s ] executions total: 1067 successful: 1067 failed: 0 disconnect: 0 [ 27s ] executions total: 1130 successful: 1130 failed: 0 disconnect: 0 [ 28s ] executions total: 1184 successful: 1184 failed: 0 disconnect: 0 [ 29s ] executions total: 1200 successful: 1200 failed: 0 disconnect: 0 [ 30s ] executions total: 1246 successful: 1246 failed: 0 disconnect: 0 [ 31s ] executions total: 1288 successful: 1288 failed: 0 disconnect: 0 [ 32s ] executions total: 1298 successful: 1298 failed: 0 disconnect: 0 [ 33s ] executions total: 1355 successful: 1355 failed: 0 disconnect: 0 [ 34s ] executions total: 1400 successful: 1400 failed: 0 disconnect: 0 [ 35s ] executions total: 1430 successful: 1430 failed: 0 disconnect: 0 [ 36s ] executions total: 1479 successful: 1479 failed: 0 disconnect: 0 [ 37s ] executions total: 1536 successful: 1536 failed: 0 disconnect: 0 [ 38s ] executions total: 1597 successful: 1597 failed: 0 disconnect: 0 [ 39s ] executions total: 1652 successful: 1652 failed: 0 disconnect: 0 [ 40s ] executions total: 1693 successful: 1693 failed: 0 disconnect: 0 [ 41s ] executions total: 1703 successful: 1703 failed: 0 disconnect: 0 [ 42s ] executions total: 1754 successful: 1754 failed: 0 disconnect: 0 [ 43s ] executions total: 1819 successful: 1819 failed: 0 disconnect: 0 [ 44s ] executions total: 1842 successful: 1842 failed: 0 disconnect: 0 [ 45s ] executions total: 1862 successful: 1862 failed: 0 disconnect: 0 [ 46s ] executions total: 1875 successful: 1875 failed: 0 disconnect: 0 [ 47s ] executions total: 1886 successful: 1886 failed: 0 disconnect: 0 [ 48s ] executions total: 1896 successful: 1896 failed: 0 disconnect: 0 [ 49s ] executions total: 1944 successful: 1944 failed: 0 disconnect: 0 [ 50s ] executions total: 2003 successful: 2003 failed: 0 disconnect: 0 [ 51s ] executions total: 2047 successful: 2047 failed: 0 disconnect: 0 [ 52s ] executions total: 2118 successful: 2118 failed: 0 disconnect: 0 [ 53s ] executions total: 2156 successful: 2156 failed: 0 disconnect: 0 [ 54s ] executions total: 2179 successful: 2179 failed: 0 disconnect: 0 [ 55s ] executions total: 2230 successful: 2230 failed: 0 disconnect: 0 [ 56s ] executions total: 2297 successful: 2297 failed: 0 disconnect: 0 [ 57s ] executions total: 2360 successful: 2360 failed: 0 disconnect: 0 [ 58s ] executions total: 2418 successful: 2418 failed: 0 disconnect: 0 [ 60s ] executions total: 2456 successful: 2456 failed: 0 disconnect: 0 Test Result: Total Executions: 2456 Successful Executions: 2456 Failed Executions: 0 Disconnection Counts: 0 Connection Information: Database Type: postgresql Host: postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local Port: 5432 Database: Table: User: postgres Org: Access Mode: mysql Test Type: executionloop Query: Duration: 60 seconds Interval: 1 seconds DB_CLIENT_BATCH_DATA_COUNT: 2456 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-postgres-cyetms --namespace ns-pkuvx ` pod/test-db-client-executionloop-postgres-cyetms patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-postgres-cyetms" force deleted `echo "DROP TABLE IF EXISTS tmp_table; CREATE TABLE IF NOT EXISTS tmp_table (id INT PRIMARY KEY , value text); INSERT INTO tmp_table (id,value) VALUES (1,'gayfp');" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file NOTICE: table "tmp_table" does not exist, skipping DROP TABLE CREATE TABLE INSERT 0 1 add consistent data gayfp Success `echo "DROP TABLE IF EXISTS tmp_table; CREATE TABLE IF NOT EXISTS tmp_table (id INT PRIMARY KEY , value text); INSERT INTO tmp_table (id,value) VALUES (1,'gayfp');" | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: cannot execute DROP TABLE in a read-only transaction ERROR: cannot execute CREATE TABLE in a read-only transaction ERROR: cannot execute INSERT in a read-only transaction check add consistent data readonly Success LB_TYPE is set to: intranet cluster expose check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster expose postgres-cyetms --auto-approve --force=true --type intranet --enable true --components postgresql --role-selector primary --namespace ns-pkuvx ` OpsRequest postgres-cyetms-expose-5r8z2 created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-expose-5r8z2 -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-expose-5r8z2 ns-pkuvx Expose postgres-cyetms postgresql Running 0/1 May 28,2025 11:41 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:35 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE 
CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-expose-5r8z2 ns-pkuvx Expose postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:41 UTC+0800 check ops status done ops_status:postgres-cyetms-expose-5r8z2 ns-pkuvx Expose postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:41 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-cyetms-expose-5r8z2 --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-expose-5r8z2 patched `kbcli cluster delete-ops --name postgres-cyetms-expose-5r8z2 --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-expose-5r8z2 deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test switchover cluster promote check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster promote postgres-cyetms --auto-approve --force=true --instance postgres-cyetms-postgresql-1 --candidate postgres-cyetms-postgresql-0 --namespace ns-pkuvx ` OpsRequest postgres-cyetms-switchover-2fcv4 created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-switchover-2fcv4 -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-switchover-2fcv4 ns-pkuvx Switchover postgres-cyetms postgres-cyetms-postgresql Running -/- May 28,2025 11:41 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi
data:3Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:35 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-switchover-2fcv4 ns-pkuvx Switchover postgres-cyetms postgres-cyetms-postgresql Succeed 1/1 May 28,2025 11:41 UTC+0800 check ops status done ops_status:postgres-cyetms-switchover-2fcv4 ns-pkuvx Switchover postgres-cyetms postgres-cyetms-postgresql Succeed 1/1 May 28,2025 11:41 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-cyetms-switchover-2fcv4 --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-switchover-2fcv4 patched `kbcli cluster delete-ops --name postgres-cyetms-switchover-2fcv4 --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-switchover-2fcv4 deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success switchover pod:postgres-cyetms-postgresql-0 switchover success `kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.username}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.password}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.port}"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres `create database benchtest;` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager,
pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file NOTICE: database "benchtest" does not exist, skipping return msg:DROP DATABASE CREATE DATABASE apiVersion: v1 kind: Pod metadata: name: benchtest-postgres-cyetms namespace: ns-pkuvx spec: containers: - name: test-sysbench imagePullPolicy: IfNotPresent image: docker.io/apecloud/customsuites:latest env: - name: TYPE value: "2" - name: FLAG value: "0" - name: CONFIGS value: "mode:all,driver:pgsql,host:postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local,user:postgres,password:G18jFmD652,port:5432,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write" restartPolicy: Never `kubectl apply -f benchtest-postgres-cyetms.yaml` pod/benchtest-postgres-cyetms created apply benchtest-postgres-cyetms.yaml Success `rm -rf benchtest-postgres-cyetms.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 0s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 5s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 11s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 16s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 22s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 28s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 34s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 39s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 45s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 51s check pod benchtest-postgres-cyetms status done pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 Completed 0 57s `kubectl logs benchtest-postgres-cyetms --tail 30 --namespace ns-pkuvx ` [ 7s ] thds: 4 tps: 4.00 qps: 66.02 (r/w/o: 39.01/18.00/9.00) lat (ms,99%): 1708.63 err/s: 0.00 reconn/s: 0.00 [ 8s ] thds: 4 tps: 9.00 qps: 187.97 (r/w/o: 138.98/32.00/17.00) lat (ms,99%): 893.56 err/s: 0.00 reconn/s: 0.00 [ 9s ] thds: 4 tps: 11.99 qps: 252.79 (r/w/o: 172.86/53.96/25.98) lat (ms,99%): 601.29 err/s: 0.00 reconn/s: 0.00 [ 10s ] thds: 4 tps: 14.01 qps: 271.23 (r/w/o: 191.16/53.05/27.02) lat (ms,99%): 484.44 err/s: 0.00 reconn/s: 0.00 SQL statistics: queries performed: read: 1162 write: 328 other: 168 total: 1658 transactions: 82 (7.89 per sec.) queries: 1658 (159.45 per sec.) ignored errors: 1 (0.10 per sec.) reconnects: 0 (0.00 per sec.) General statistics: total time: 10.3967s total number of events: 82 Latency (ms): min: 10.00 avg: 498.19 max: 1906.08 99th percentile: 1803.47 sum: 40851.31 Threads fairness: events (avg/stddev): 20.5000/1.66 execution time (avg/stddev): 10.2128/0.13 `kubectl delete pod benchtest-postgres-cyetms --force --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "benchtest-postgres-cyetms" force deleted LB_TYPE is set to: intranet No resources found in ns-pkuvx namespace. 
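The ELB hostname used by the next benchmark run comes from the Service created by the earlier intranet expose ops; a minimal way to look it up (not recorded in this run) would be: `kubectl get svc -n ns-pkuvx | grep postgres-cyetms`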
`kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.username}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.password}"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="{.data.port}"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres `create database benchtest;` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file return msg:DROP DATABASE CREATE DATABASE apiVersion: v1 kind: Pod metadata: name: benchtest-postgres-cyetms namespace: ns-pkuvx spec: containers: - name: test-sysbench imagePullPolicy: IfNotPresent image: docker.io/apecloud/customsuites:latest env: - name: TYPE value: "2" - name: FLAG value: "0" - name: CONFIGS value: "mode:all,driver:pgsql,host:a8712f9c43bcf425789136bd77730647-7f2445a7514031d7.elb.us-west-2.amazonaws.com,user:postgres,password:G18jFmD652,port:5432,db:benchtest,tables:5,threads:4,times:10,size:1000,type:oltp_read_write" restartPolicy: Never `kubectl apply -f benchtest-postgres-cyetms.yaml` pod/benchtest-postgres-cyetms created apply benchtest-postgres-cyetms.yaml Success `rm -rf benchtest-postgres-cyetms.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 ContainerCreating 0 1s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 5s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 11s pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 1/1 Running 0 17s check pod benchtest-postgres-cyetms status done pod_status:NAME READY STATUS RESTARTS AGE benchtest-postgres-cyetms 0/1 Completed 0 23s `kubectl logs benchtest-postgres-cyetms --tail 30 --namespace ns-pkuvx ` [ 7s ] thds: 4 tps: 11.00 qps: 241.94 (r/w/o: 172.96/46.99/21.99) lat (ms,99%): 601.29 err/s: 0.00 reconn/s: 0.00 [ 8s ] thds: 4 tps: 13.00 qps: 261.07 (r/w/o: 182.05/51.01/28.01) lat (ms,99%): 893.56 err/s: 0.00 reconn/s: 0.00 [ 9s ] thds: 4 tps: 14.00 qps: 292.00 (r/w/o: 204.00/60.00/28.00) lat (ms,99%): 694.45 err/s: 0.00 reconn/s: 0.00 [ 10s ] thds: 4 tps: 12.00 qps: 246.96 (r/w/o: 174.97/47.99/24.00) lat (ms,99%): 590.56 err/s: 0.00 reconn/s: 0.00 SQL statistics: queries performed: read: 1638 write: 465 other: 237 total: 2340 transactions: 117 (11.36 per sec.) queries: 2340 (227.13 per sec.) ignored errors: 0 (0.00 per sec.) reconnects: 0 (0.00 per sec.) General statistics: total time: 10.3012s total number of events: 117 Latency (ms): min: 12.10 avg: 349.45 max: 1400.01 99th percentile: 909.80 sum: 40886.20 Threads fairness: events (avg/stddev): 29.2500/3.70 execution time (avg/stddev): 10.2216/0.08 `kubectl delete pod benchtest-postgres-cyetms --force --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "benchtest-postgres-cyetms" force deleted test failover drainnode check node drain check node drain success kubectl get pod postgres-cyetms-postgresql-0 --namespace ns-pkuvx -o jsonpath='{.spec.nodeName}' get node name:ip-172-31-5-153.us-west-2.compute.internal success check if multiple pods are on the same node kubectl get pod postgres-cyetms-postgresql-1 --namespace ns-pkuvx -o jsonpath='{.spec.nodeName}' get node name:ip-172-31-9-41.us-west-2.compute.internal success kubectl drain ip-172-31-5-153.us-west-2.compute.internal --delete-emptydir-data --ignore-daemonsets --force --grace-period 0 --timeout 60s node/ip-172-31-5-153.us-west-2.compute.internal cordoned Warning: ignoring DaemonSet-managed Pods: chaos-mesh/chaos-daemon-q2p95, kube-system/aws-node-zsbmq, kube-system/ebs-csi-node-wxk78, kube-system/kube-proxy-89wtr evicting pod ns-wscoz/kafka-htlrnp-kafka-broker-0 evicting pod ns-pkuvx/postgres-cyetms-postgresql-0 evicting pod ns-rffbw/mongodb-ufulcp-mongodb-1 pod/kafka-htlrnp-kafka-broker-0 evicted pod/postgres-cyetms-postgresql-0 evicted pod/mongodb-ufulcp-mongodb-1 evicted node/ip-172-31-5-153.us-west-2.compute.internal drained kubectl uncordon ip-172-31-5-153.us-west-2.compute.internal node/ip-172-31-5-153.us-west-2.compute.internal uncordoned check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:47 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 11:35 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover drainnode Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl
(init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster restart check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster restart postgres-cyetms --auto-approve --force=true --namespace ns-pkuvx ` OpsRequest postgres-cyetms-restart-74kln created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-restart-74kln -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-restart-74kln ns-pkuvx Restart postgres-cyetms postgresql Running 0/2 May 28,2025 11:48 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:49 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE 
CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-restart-74kln ns-pkuvx Restart postgres-cyetms postgresql Succeed 2/2 May 28,2025 11:48 UTC+0800 check ops status done ops_status:postgres-cyetms-restart-74kln ns-pkuvx Restart postgres-cyetms postgresql Succeed 2/2 May 28,2025 11:48 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-cyetms-restart-74kln --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-restart-74kln patched `kbcli cluster delete-ops --name postgres-cyetms-restart-74kln --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-restart-74kln deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster postgresql scale-out check cluster status before ops check cluster status done cluster_status:Running No resources found in postgres-cyetms namespace.
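Instead of polling kbcli cluster list-ops, the same progress can be watched directly on the OpsRequest resources (same resource name as the patch commands above); a minimal sketch: `kubectl get opsrequests.operations.kubeblocks.io -n ns-pkuvx -w`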
`kbcli cluster scale-out postgres-cyetms --auto-approve --force=true --components postgresql --replicas 1 --namespace ns-pkuvx ` OpsRequest postgres-cyetms-horizontalscaling-8gb2q created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-horizontalscaling-8gb2q -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-horizontalscaling-8gb2q ns-pkuvx HorizontalScaling postgres-cyetms postgresql Running 0/1 May 28,2025 11:52 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:49 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:51 UTC+0800 postgres-cyetms-postgresql-2 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 11:52 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 postgres-cyetms-postgresql-2 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done No resources found in postgres-cyetms namespace. 
check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-horizontalscaling-8gb2q ns-pkuvx HorizontalScaling postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:52 UTC+0800 check ops status done ops_status:postgres-cyetms-horizontalscaling-8gb2q ns-pkuvx HorizontalScaling postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:52 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations postgres-cyetms-horizontalscaling-8gb2q --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-horizontalscaling-8gb2q patched `kbcli cluster delete-ops --name postgres-cyetms-horizontalscaling-8gb2q --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-horizontalscaling-8gb2q deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster postgresql scale-in check cluster status before ops check cluster status done cluster_status:Running No resources found in postgres-cyetms namespace.
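Before scaling back in, streaming replication from the scaled-out secondary could be confirmed on the primary; a minimal sketch (this query is assumed, not part of the recorded checks): `echo 'select application_name, state, sync_state from pg_stat_replication;' | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres`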
`kbcli cluster scale-in postgres-cyetms --auto-approve --force=true --components postgresql --replicas 1 --namespace ns-pkuvx ` OpsRequest postgres-cyetms-horizontalscaling-g8wtd created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-horizontalscaling-g8wtd -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-horizontalscaling-g8wtd ns-pkuvx HorizontalScaling postgres-cyetms postgresql Running 0/1 May 28,2025 11:55 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:49 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:51 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done No resources found in postgres-cyetms namespace. 
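Editor's note: after the scale-in the component is back to two instances. The same can be confirmed without kbcli, assuming the app.kubernetes.io/instance label used for the secrets earlier in this run is also set on the pods.
# Two pods expected, plus the cluster object reporting Running.
kubectl get pods --namespace ns-pkuvx -l app.kubernetes.io/instance=postgres-cyetms -o wide
kubectl get clusters.apps.kubeblocks.io postgres-cyetms --namespace ns-pkuvx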
check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-horizontalscaling-g8wtd ns-pkuvx HorizontalScaling postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:55 UTC+0800 check ops status done ops_status:postgres-cyetms-horizontalscaling-g8wtd ns-pkuvx HorizontalScaling postgres-cyetms postgresql Succeed 1/1 May 28,2025 11:55 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-horizontalscaling-g8wtd --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-horizontalscaling-g8wtd patched `kbcli cluster delete-ops --name postgres-cyetms-horizontalscaling-g8wtd --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-horizontalscaling-g8wtd deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster vscale postgres-cyetms --auto-approve --force=true --components postgresql --cpu 200m --memory 0.6Gi --namespace ns-pkuvx ` OpsRequest postgres-cyetms-verticalscaling-wq7qv created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-verticalscaling-wq7qv -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-verticalscaling-wq7qv ns-pkuvx VerticalScaling postgres-cyetms postgresql Running 0/2 May 28,2025 11:56 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-verticalscaling-wq7qv ns-pkuvx VerticalScaling postgres-cyetms postgresql Succeed 2/2 May 28,2025 11:56 UTC+0800 check ops status done ops_status:postgres-cyetms-verticalscaling-wq7qv ns-pkuvx VerticalScaling postgres-cyetms postgresql Succeed 2/2 May 28,2025 11:56 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-verticalscaling-wq7qv --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-verticalscaling-wq7qv patched `kbcli cluster delete-ops --name postgres-cyetms-verticalscaling-wq7qv --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-verticalscaling-wq7qv deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch 
data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networkbandwidthover check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkbandwidthover-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 action: bandwidth mode: all bandwidth: rate: '1bps' limit: 20971520 buffer: 10000 duration: 2m `kubectl apply -f test-chaos-mesh-networkbandwidthover-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-postgres-cyetms created apply test-chaos-mesh-networkbandwidthover-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networkbandwidthover-postgres-cyetms.yaml` networkbandwidthover chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
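Editor's note: while a NetworkChaos experiment such as the bandwidth test above is active, its injection status and its effect on replication can be watched directly. A sketch using only objects from this run; the lag columns exist in PostgreSQL 10 and later.
kubectl describe networkchaos test-chaos-mesh-networkbandwidthover-postgres-cyetms --namespace ns-pkuvx
# Throttling the secondary to 1bps should show up as growing lag on the primary.
echo 'SELECT application_name, write_lag, flush_lag, replay_lag FROM pg_stat_replication;' | kubectl exec -i postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres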
networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover networkbandwidthover Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster configure component_tmp: postgresql apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-reconfiguring- namespace: ns-pkuvx spec: type: Reconfiguring clusterName: postgres-cyetms force: true reconfigures: - componentName: postgresql parameters: - key: max_connections value: '200' check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-reconfiguring-glzrz created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-reconfiguring-glzrz ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Running -/- May 28,2025 12:03 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 
postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-reconfiguring-glzrz ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:03 UTC+0800 check ops status done ops_status:postgres-cyetms-reconfiguring-glzrz ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:03 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-reconfiguring-glzrz --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-reconfiguring-glzrz patched `kbcli cluster delete-ops --name postgres-cyetms-reconfiguring-glzrz --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-reconfiguring-glzrz deleted component_config:postgresql check config variables Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file current value_actual: 200 configure:[max_connections] result actual:[200] equal expected:[200] `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster does not need to check monitor currently check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 
app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done test failover connectionstress check node drain check node drain success Error from server (NotFound): pods "test-db-client-connectionstress-postgres-cyetms" not found `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-connectionstress-postgres-cyetms --namespace ns-pkuvx ` Error from server (NotFound): pods "test-db-client-connectionstress-postgres-cyetms" not found Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): pods "test-db-client-connectionstress-postgres-cyetms" not found `kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.username***"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.password***"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.port***"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres apiVersion: v1 kind: Pod metadata: name: test-db-client-connectionstress-postgres-cyetms namespace: ns-pkuvx spec: containers: - name: test-dbclient imagePullPolicy: IfNotPresent image: docker.io/apecloud/dbclient:test args: - "--host" - "postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local" - "--user" - "postgres" - "--password" - "G18jFmD652" - "--port" - "5432" - "--database" - "postgres" - "--dbtype" - "postgresql" - "--test" - "connectionstress" - "--connections" - "200" - "--duration" - "60" restartPolicy: Never `kubectl apply -f test-db-client-connectionstress-postgres-cyetms.yaml` pod/test-db-client-connectionstress-postgres-cyetms created apply test-db-client-connectionstress-postgres-cyetms.yaml Success `rm -rf test-db-client-connectionstress-postgres-cyetms.yaml` check pod status pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-postgres-cyetms 1/1 Running 0 6s check pod test-db-client-connectionstress-postgres-cyetms status done pod_status:NAME READY STATUS RESTARTS AGE test-db-client-connectionstress-postgres-cyetms 0/1 Completed 0 11s check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS 
postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.(PgConnection.java:194) at org.postgresql.Driver.makeConnection(Driver.java:431) at org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:62) at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:113) at com.apecloud.dbtester.tester.TestExecutor.executeTest(TestExecutor.java:34) at OneClient.executeTest(OneClient.java:105) at OneClient.main(OneClient.java:37) 04:04:08.461 [main] DEBUG com.yashandb.conf.ConnectionUrl -- JDBC URL must start with "jdbc:yasdb:" but was: jdbc:postgresql://postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local:5432/postgres?useSSL=false java.io.IOException: Failed to connect to PostgreSQL database: at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64) at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:113) at com.apecloud.dbtester.tester.TestExecutor.executeTest(TestExecutor.java:34) at OneClient.executeTest(OneClient.java:105) at OneClient.main(OneClient.java:37) Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.(PgConnection.java:194) at org.postgresql.Driver.makeConnection(Driver.java:431) at org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:62) ... 
4 more May 28, 2025 4:04:08 AM org.postgresql.Driver connect SEVERE: Connection error: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.(PgConnection.java:194) at org.postgresql.Driver.makeConnection(Driver.java:431) at org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:57) at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:113) at com.apecloud.dbtester.tester.TestExecutor.executeTest(TestExecutor.java:34) at OneClient.executeTest(OneClient.java:105) at OneClient.main(OneClient.java:37) 04:04:08.547 [main] DEBUG com.yashandb.conf.ConnectionUrl -- JDBC URL must start with "jdbc:yasdb:" but was: jdbc:postgresql://postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local:5432/postgres?useSSL=false Failed to connect to PostgreSQL database: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already Trying with database PostgreSQL. May 28, 2025 4:04:08 AM org.postgresql.Driver connect SEVERE: Connection error: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.(PgConnection.java:194) at org.postgresql.Driver.makeConnection(Driver.java:431) at org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:62) at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:113) at com.apecloud.dbtester.tester.TestExecutor.executeTest(TestExecutor.java:34) at OneClient.executeTest(OneClient.java:105) at OneClient.main(OneClient.java:37) 04:04:08.552 [main] DEBUG com.yashandb.conf.ConnectionUrl -- JDBC URL must start with "jdbc:yasdb:" but was: jdbc:postgresql://postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local:5432/postgres?useSSL=false java.io.IOException: Failed to connect to PostgreSQL database: at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:64) at com.apecloud.dbtester.tester.PostgreSQLTester.connectionStress(PostgreSQLTester.java:113) at com.apecloud.dbtester.tester.TestExecutor.executeTest(TestExecutor.java:34) at OneClient.executeTest(OneClient.java:105) at OneClient.main(OneClient.java:37) Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:438) at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) at org.postgresql.jdbc.PgConnection.(PgConnection.java:194) at 
org.postgresql.Driver.makeConnection(Driver.java:431) at org.postgresql.Driver.connect(Driver.java:247) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at java.sql/java.sql.DriverManager.getConnection(Unknown Source) at com.apecloud.dbtester.tester.PostgreSQLTester.connect(PostgreSQLTester.java:62) Test Result: null Connection Information: Database Type: postgresql Host: postgres-cyetms-postgresql-postgresql.ns-pkuvx.svc.cluster.local ... 4 more Port: 5432 Database: postgres Table: User: postgres Org: Access Mode: mysql Test Type: connectionstress Connection Count: 200 Duration: 60 seconds `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-connectionstress-postgres-cyetms --namespace ns-pkuvx ` pod/test-db-client-connectionstress-postgres-cyetms patched (no change) Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-db-client-connectionstress-postgres-cyetms" force deleted check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover connectionstress Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networkduplicate check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
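Editor's note: the "FATAL: sorry, too many clients already" errors in the connection-stress output above are the expected way for a 200-connection run to hit the ceiling that was reconfigured to 200 earlier in this log. The headroom can be read back at any time:
echo 'SHOW max_connections;' | kubectl exec -i postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres -t
echo 'SELECT count(*) FROM pg_stat_activity;' | kubectl exec -i postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres -t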
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkduplicate-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all action: duplicate duplicate: duplicate: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkduplicate-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-postgres-cyetms created apply test-chaos-mesh-networkduplicate-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networkduplicate-postgres-cyetms.yaml` networkduplicate chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
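Editor's note: each chaos spec in this run picks its target through the apps.kubeblocks.io/pod-name label selector (postgresql-0 for the duplicate experiment above). The selector can be resolved to concrete pods, and active experiments listed, before or during an injection:
kubectl get networkchaos --namespace ns-pkuvx
kubectl get pods --namespace ns-pkuvx -l apps.kubeblocks.io/pod-name=postgres-cyetms-postgresql-0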
networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover networkduplicate Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networkpartition check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkpartition-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkpartition-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 action: partition mode: all target: mode: all selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkpartition-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-postgres-cyetms created apply test-chaos-mesh-networkpartition-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networkpartition-postgres-cyetms.yaml` networkpartition chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkpartition-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
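Editor's note: the partition above only drops traffic from postgresql-0 toward postgresql-1 (direction: to), so the primary keeps its role. During the two-minute window the break can be probed from inside the primary; this assumes pg_isready ships in the postgresql container image, which is not shown in this log.
PEER_IP=$(kubectl get pod postgres-cyetms-postgresql-1 --namespace ns-pkuvx -o jsonpath='{.status.podIP}')
# Assumes pg_isready is available in the postgresql container; expect a timeout while the partition is active.
kubectl exec -i postgres-cyetms-postgresql-0 --namespace ns-pkuvx -c postgresql -- pg_isready -h "$PEER_IP" -p 5432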
networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover networkpartition Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover fullcpu check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge StressChaos test-chaos-mesh-fullcpu-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-cyetms" not found Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: StressChaos metadata: name: test-chaos-mesh-fullcpu-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all stressors: cpu: workers: 100 load: 100 duration: 2m `kubectl apply -f test-chaos-mesh-fullcpu-postgres-cyetms.yaml` stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpu-postgres-cyetms created apply test-chaos-mesh-fullcpu-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-fullcpu-postgres-cyetms.yaml` fullcpu chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge StressChaos test-chaos-mesh-fullcpu-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
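Editor's note: the 100-worker CPU stress above runs inside the target container, so it is expected to be bounded by the pod's own 200m CPU limit, which is why the cluster only briefly reports Updating. Usage can be sampled while the stress runs; `kubectl top` needs metrics-server, which this run does not show, so treat it as an optional check.
kubectl top pod postgres-cyetms-postgresql-0 --namespace ns-pkuvx --containers
# Only works while the StressChaos object still exists.
kubectl get stresschaos test-chaos-mesh-fullcpu-postgres-cyetms --namespace ns-pkuvx -o yaml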
stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-cyetms" force deleted Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover fullcpu Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover timeoffset check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge TimeChaos test-chaos-mesh-timeoffset-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-cyetms" not found Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: TimeChaos metadata: name: test-chaos-mesh-timeoffset-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all timeOffset: '-10m' clockIds: - CLOCK_REALTIME duration: 2m `kubectl apply -f test-chaos-mesh-timeoffset-postgres-cyetms.yaml` timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-postgres-cyetms created apply test-chaos-mesh-timeoffset-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-timeoffset-postgres-cyetms.yaml` timeoffset chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 11:58 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge TimeChaos test-chaos-mesh-timeoffset-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
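Editor's note: the TimeChaos above shifts CLOCK_REALTIME by -10m for processes in postgresql-0. Whether the offset is visible through a given path depends on how Chaos Mesh attaches to the processes, so a simple check is to compare the server clock with the client side while the experiment is active:
echo 'SELECT now();' | kubectl exec -i postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres -t
# Local client clock for comparison.
date -u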
timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-cyetms" force deleted Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover timeoffset Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover delete pod:postgres-cyetms-postgresql-0 `kubectl delete pod postgres-cyetms-postgresql-0 --namespace ns-pkuvx ` pod "postgres-cyetms-postgresql-0" deleted check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-128.us-west-2.compute.internal/172.31.13.128 May 28,2025 12:16 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 11:56 UTC+0800 check pod status done check cluster role check cluster role done primary: 
postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster configure component_tmp: postgresql apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-reconfiguring- namespace: ns-pkuvx spec: type: Reconfiguring clusterName: postgres-cyetms force: true reconfigures: - componentName: postgresql parameters: - key: shared_buffers value: '512MB' check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-reconfiguring-5wktq created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-reconfiguring-5wktq ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Running -/- May 28,2025 12:17 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:18 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:19 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-reconfiguring-5wktq ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:17 UTC+0800 check ops status done ops_status:postgres-cyetms-reconfiguring-5wktq ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:17 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-reconfiguring-5wktq --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-reconfiguring-5wktq patched `kbcli cluster delete-ops --name postgres-cyetms-reconfiguring-5wktq --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-reconfiguring-5wktq deleted component_config:postgresql check config variables Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file current value_actual: 512MB configure:[shared_buffers] result actual:[512MB] equal expected:[512MB] `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it 
postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster stop check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster stop postgres-cyetms --auto-approve --force=true --namespace ns-pkuvx ` OpsRequest postgres-cyetms-stop-lwfjj created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-stop-lwfjj -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-stop-lwfjj ns-pkuvx Stop postgres-cyetms postgresql Running 0/2 May 28,2025 12:21 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Stopping May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping cluster_status:Stopping check cluster status done cluster_status:Stopped check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-stop-lwfjj ns-pkuvx Stop postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:21 UTC+0800 check ops status done ops_status:postgres-cyetms-stop-lwfjj ns-pkuvx Stop postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:21 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-stop-lwfjj --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-stop-lwfjj patched `kbcli cluster delete-ops --name postgres-cyetms-stop-lwfjj --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-stop-lwfjj deleted cluster start check cluster status before ops check cluster status done cluster_status:Stopped `kbcli cluster start postgres-cyetms --force=true --namespace ns-pkuvx ` OpsRequest postgres-cyetms-start-k7ms9 created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-start-k7ms9 -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-start-k7ms9 ns-pkuvx Start postgres-cyetms Running -/- May 28,2025 12:22 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating 
cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-start-k7ms9 ns-pkuvx Start postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:22 UTC+0800 check ops status done ops_status:postgres-cyetms-start-k7ms9 ns-pkuvx Start postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:22 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-start-k7ms9 --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-start-k7ms9 patched `kbcli cluster delete-ops --name postgres-cyetms-start-k7ms9 --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-start-k7ms9 deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover oom check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge StressChaos test-chaos-mesh-oom-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running 
resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-cyetms" not found Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: StressChaos metadata: name: test-chaos-mesh-oom-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all stressors: memory: workers: 1 size: "100GB" oomScoreAdj: -1000 duration: 2m `kubectl apply -f test-chaos-mesh-oom-postgres-cyetms.yaml` stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-cyetms created apply test-chaos-mesh-oom-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-oom-postgres-cyetms.yaml` check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` connect checking... connect checking... connect checking... connect checking... check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge StressChaos test-chaos-mesh-oom-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
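This patch clears the chaos object's finalizers so it can be removed without waiting for the chaos-mesh controller to unwind the experiment; written out with its braces, the merge patch is `kubectl patch StressChaos test-chaos-mesh-oom-postgres-cyetms -n ns-pkuvx --type=merge -p '{"metadata":{"finalizers":[]}}'`, and it is what lets the force delete that follows actually remove the object instead of leaving it stuck behind its finalizer. While a memory stressor like this runs against the primary, the role switch can be watched live with `kubectl get pods -n ns-pkuvx -l app.kubernetes.io/instance=postgres-cyetms -L kubeblocks.io/role -w` (the kubeblocks.io/role pod label is an assumption here, based on how recent KubeBlocks releases label replicas).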
stresschaos.chaos-mesh.org "test-chaos-mesh-oom-postgres-cyetms" force deleted stresschaos.chaos-mesh.org/test-chaos-mesh-oom-postgres-cyetms patched check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover oom Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover kill1 check node drain check node drain success `kill 1` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file exec return message: check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover kill1 Success `echo 'SELECT value FROM tmp_table WHERE 
id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover podfailure check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge PodChaos test-chaos-mesh-podfailure-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-cyetms" not found Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podfailure-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all action: pod-failure duration: 2m `kubectl apply -f test-chaos-mesh-podfailure-postgres-cyetms.yaml` podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-postgres-cyetms created apply test-chaos-mesh-podfailure-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-podfailure-postgres-cyetms.yaml` podfailure chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Failed May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 
UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge PodChaos test-chaos-mesh-podfailure-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-cyetms" force deleted Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover podfailure Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success `kubectl get pvc -l app.kubernetes.io/instance=postgres-cyetms,apps.kubeblocks.io/component-name=postgresql,apps.kubeblocks.io/vct-name=data --namespace ns-pkuvx ` cluster volume-expand check cluster status before ops check cluster status done cluster_status:Running No resources found in postgres-cyetms namespace. 
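The trailing 'No resources found in postgres-cyetms namespace.' line points at a query that used the cluster name as the namespace; scoped to the test namespace, the claims are visible with the same selector, e.g. `kubectl get pvc -n ns-pkuvx -l app.kubernetes.io/instance=postgres-cyetms,apps.kubeblocks.io/component-name=postgresql,apps.kubeblocks.io/vct-name=data`, and it is worth capturing that output before the volume-expand request so the 3Gi starting size is on record.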
`kbcli cluster volume-expand postgres-cyetms --auto-approve --force=true --components postgresql --volume-claim-templates data --storage 5Gi --namespace ns-pkuvx ` OpsRequest postgres-cyetms-volumeexpansion-6qhxd created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-volumeexpansion-6qhxd -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-volumeexpansion-6qhxd ns-pkuvx VolumeExpansion postgres-cyetms postgresql Running 0/2 May 28,2025 12:28 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done No resources found in postgres-cyetms namespace. 
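The instance listing now reports data:5Gi, but the expansion is only complete end to end when the PVC status and the filesystem inside the pod agree with it. A quick confirmation, assuming the data claims are the only ones matching the selector, is `kubectl get pvc -n ns-pkuvx -l app.kubernetes.io/instance=postgres-cyetms,apps.kubeblocks.io/vct-name=data -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage` plus `kubectl exec postgres-cyetms-postgresql-1 -n ns-pkuvx -c postgresql -- df -h` to check that the mount backing the data volume has grown to roughly 5Gi.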
check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-volumeexpansion-6qhxd ns-pkuvx VolumeExpansion postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:28 UTC+0800 check ops status done ops_status:postgres-cyetms-volumeexpansion-6qhxd ns-pkuvx VolumeExpansion postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:28 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-volumeexpansion-6qhxd --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-volumeexpansion-6qhxd patched `kbcli cluster delete-ops --name postgres-cyetms-volumeexpansion-6qhxd --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-volumeexpansion-6qhxd deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover dnsrandom check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnsrandom-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-cyetms" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnsrandom-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 mode: all action: random duration: 2m `kubectl apply -f test-chaos-mesh-dnsrandom-postgres-cyetms.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-postgres-cyetms created apply test-chaos-mesh-dnsrandom-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-dnsrandom-postgres-cyetms.yaml` dnsrandom chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnsrandom-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
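With action: random, chaos-mesh answers DNS lookups from the selected pod (here the primary, postgresql-1) with random addresses for the two-minute window, so only code paths that actually re-resolve hostnames are exercised; established client and replication connections are unaffected, which matches the cluster staying Running. The fault can be observed from inside the pod with something like `kubectl exec postgres-cyetms-postgresql-1 -n ns-pkuvx -c postgresql -- getent hosts kubernetes.default.svc.cluster.local` (getent being an assumption about the image), which returns nonsense addresses while the chaos is active and behaves normally again once the DNSChaos object is cleared below.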
dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-cyetms" force deleted Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover dnsrandom Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networkdelay check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkdelay-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkdelay-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 mode: all action: delay delay: latency: 2000ms correlation: '100' jitter: 0ms direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkdelay-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-postgres-cyetms created apply test-chaos-mesh-networkdelay-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networkdelay-postgres-cyetms.yaml` networkdelay chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkdelay-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
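A 2000ms delay applied with direction: to shapes outbound traffic from the primary, so the most visible effect is replication lag on the standby rather than failed connections, and the primary stays on postgresql-1 through the window. While the delay is active the lag can be read off the primary directly: `kubectl exec postgres-cyetms-postgresql-1 -n ns-pkuvx -c postgresql -- psql -U postgres -c 'SELECT application_name, state, write_lag, flush_lag, replay_lag FROM pg_stat_replication;'`; the lag columns should sit near the injected two seconds and fall back toward zero after the NetworkChaos object is removed below.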
networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover networkdelay Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover dnserror check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnserror-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-cyetms" not found Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: DNSChaos metadata: name: test-chaos-mesh-dnserror-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 mode: all action: error duration: 2m `kubectl apply -f test-chaos-mesh-dnserror-postgres-cyetms.yaml` dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-postgres-cyetms created apply test-chaos-mesh-dnserror-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-dnserror-postgres-cyetms.yaml` dnserror chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge DNSChaos test-chaos-mesh-dnserror-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
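action: error is the harsher variant of the earlier DNS test: matching lookups from postgresql-1 fail outright instead of resolving to random addresses. The same in-pod probe applies, `kubectl exec postgres-cyetms-postgresql-1 -n ns-pkuvx -c postgresql -- getent hosts kubernetes.default.svc.cluster.local` (again assuming getent is present in the image), and during the fault it should exit non-zero; resolution recovers as soon as the DNSChaos object is deleted below, and the primary again stays on postgresql-1.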
dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-cyetms" force deleted Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover dnserror Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networkcorruptover check node drain check node drain success `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkcorruptover-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-1 mode: all action: corrupt corrupt: corrupt: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkcorruptover-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-postgres-cyetms created apply test-chaos-mesh-networkcorruptover-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networkcorruptover-postgres-cyetms.yaml` networkcorruptover chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role No resources found in ns-pkuvx namespace. primary: postgres-cyetms-postgresql-0 postgres-cyetms-postgresql-1;secondary: check cluster role done primary: postgres-cyetms-postgresql-0 postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
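The role check above is the one genuinely odd readout of this run: it briefly lists both pods as primary. With 100% corruption on the old primary's outbound traffic, the secondary gets promoted while the isolated pod presumably still carries its stale primary label, so a label-based check sees the overlap until the demotion is reflected. When the listing is ambiguous like this, the database itself is the tie-breaker, e.g. `kubectl exec postgres-cyetms-postgresql-0 -n ns-pkuvx -c postgresql -- psql -U postgres -tAc 'SELECT pg_is_in_recovery();'` run against each pod, where f marks the writable primary and t a standby. Once the experiment is force deleted below, the roles settle with postgresql-0 as primary.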
networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-0 failover networkcorruptover Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success test failover networklossover check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-cyetms" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-cyetms" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networklossover-postgres-cyetms namespace: ns-pkuvx spec: selector: namespaces: - ns-pkuvx labelSelectors: apps.kubeblocks.io/pod-name: postgres-cyetms-postgresql-0 mode: all action: loss loss: loss: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networklossover-postgres-cyetms.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-postgres-cyetms created apply test-chaos-mesh-networklossover-postgres-cyetms.yaml Success `rm -rf test-chaos-mesh-networklossover-postgres-cyetms.yaml` networklossover chaos test waiting 120 seconds check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networklossover-postgres-cyetms --namespace ns-pkuvx ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
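This time the 100% loss is injected on the secondary, postgresql-0, so the primary keeps serving writes and the roles do not move; the cost lands on the standby, and the read-only checks that follow first fail with 'No such file or directory' and then 'the database system is starting up' while postgresql-0 restarts and re-establishes replication after the blackout. Its catch-up can be watched with `kubectl exec postgres-cyetms-postgresql-0 -n ns-pkuvx -c postgresql -- psql -U postgres -c 'SELECT status, received_lsn, latest_end_lsn FROM pg_stat_wal_receiver;'` (column names as in PostgreSQL 12), where status returns to streaming and the two LSNs converge as replication resumes.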
networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-cyetms" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-postgres-cyetms" not found check failover pod name failover pod name:postgres-cyetms-postgresql-1 failover networklossover Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory Is the server running locally and accepting connections on that socket? command terminated with exit code 2 checking cluster readonly data consistent... check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: the database system is starting up command terminated with exit code 2 checking cluster readonly data consistent... 
check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cmpv upgrade service version:2,12.14.0|2,12.14.1|2,12.15.0|2,14.7.2|2,14.8.0|2,15.7.0|2,16.4.0 set latest cmpv service version latest service version:12.15.0 cmpv service version upgrade and downgrade upgrade from:12.14.0 to service version:12.15.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.15.0 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-vt84h created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-vt84h ns-pkuvx Upgrade postgres-cyetms postgresql Running 0/2 May 28,2025 12:42 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U 
postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-vt84h ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:42 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-vt84h ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:42 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-vt84h --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-vt84h patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-vt84h --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-vt84h deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success downgrade from:12.15.0 to service version:12.14.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.14.0 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-k8wxg created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-k8wxg ns-pkuvx Upgrade postgres-cyetms postgresql Running 0/2 May 28,2025 12:44 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql 
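The Updating polls that follow are again just the wait loop while the downgrade to 12.14.0 rolls through both replicas. After an Upgrade ops settles, the declared and the actually running versions can be compared, assuming postgresql is the first (and only) entry in componentSpecs: `kubectl get clusters.apps.kubeblocks.io postgres-cyetms -n ns-pkuvx -o jsonpath='{.spec.componentSpecs[0].serviceVersion}'` gives the serviceVersion the OpsRequest wrote into the spec, and `kubectl exec postgres-cyetms-postgresql-1 -n ns-pkuvx -c postgresql -- psql -U postgres -tAc 'SELECT version();'` shows the server binary the pods are really running.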
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-k8wxg ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:44 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-k8wxg ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:44 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-k8wxg --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-k8wxg patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-k8wxg --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-k8wxg deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success upgrade from:12.14.0 to service version:12.14.1 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: 
OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.14.1 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-6qcjj created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-6qcjj ns-pkuvx Upgrade postgres-cyetms postgresql Running 0/2 May 28,2025 12:45 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-6qcjj ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:45 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-6qcjj ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:45 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-6qcjj --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-6qcjj patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-6qcjj --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-6qcjj deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out 
of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success upgrade from:12.14.1 to service version:12.15.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.15.0 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-rk8mg created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-rk8mg ns-pkuvx Upgrade postgres-cyetms postgresql Running 0/2 May 28,2025 12:47 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl 
exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-rk8mg ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:47 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-rk8mg ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:47 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-rk8mg --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-rk8mg patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-rk8mg --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-rk8mg deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success downgrade from:12.15.0 to service version:12.14.1 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.14.1 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-glcr7 created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-glcr7 ns-pkuvx Upgrade postgres-cyetms postgresql Running 0/2 May 28,2025 12:49 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 
app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-0;secondary: postgres-cyetms-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-glcr7 ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:49 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-glcr7 ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:49 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-glcr7 --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-glcr7 patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-glcr7 --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-glcr7 deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cmpv service version 
downgrade downgrade from:12.14.1 to service version:12.14.0 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-upgrade-cmpv- namespace: ns-pkuvx spec: clusterName: postgres-cyetms upgrade: components: - componentName: postgresql serviceVersion: 12.14.0 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-hfwgz created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-hfwgz ns-pkuvx Upgrade postgres-cyetms Running -/- May 28,2025 12:50 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql Delete Updating May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-upgrade-cmpv-hfwgz ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:50 UTC+0800 check ops status done ops_status:postgres-cyetms-upgrade-cmpv-hfwgz ns-pkuvx Upgrade postgres-cyetms postgresql Succeed 2/2 May 28,2025 12:50 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-upgrade-cmpv-hfwgz --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-upgrade-cmpv-hfwgz patched `kbcli cluster delete-ops --name postgres-cyetms-upgrade-cmpv-hfwgz --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-upgrade-cmpv-hfwgz deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container 
(init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster update terminationPolicy WipeOut `kbcli cluster update postgres-cyetms --termination-policy=WipeOut --namespace ns-pkuvx ` cluster.apps.kubeblocks.io/postgres-cyetms updated check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql WipeOut Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-3-36.us-west-2.compute.internal/172.31.3.36 May 28,2025 12:22 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done cluster pg-basebackup backup `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="***.spec.credential.name***"` `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="***.spec.credential.namespace***"` `kubectl get secrets kb-backuprepo-mb4vp -n kb-tgili -o jsonpath="***.data.accessKeyId***"` `kubectl get secrets kb-backuprepo-mb4vp -n kb-tgili -o jsonpath="***.data.secretAccessKey***"` KUBEBLOCKS NAMESPACE:kb-tgili get kubeblocks namespace done `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-tgili -o jsonpath="***.items[0].data.root-user***"` `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-tgili -o jsonpath="***.items[0].data.root-password***"` minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-tgili.svc.cluster.local:9000 list minio bucket 
kbcli-test `echo 'mc config host add minioserver http://kbcli-test-minio.kb-tgili.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-5f4dfb568b-4g59b --namespace kb-tgili -- bash` Unable to use a TTY - input is not a terminal or the right kind of file list minio bucket done default backuprepo:backuprepo-kbcli-test exists `kbcli cluster backup postgres-cyetms --method pg-basebackup --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528125217 created successfully, you can view the progress: kbcli cluster list-backups --name=backup-ns-pkuvx-postgres-cyetms-20250528125217 -n ns-pkuvx check backup status `kbcli cluster list-backups postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-pkuvx-postgres-cyetms-20250528125217 ns-pkuvx postgres-cyetms pg-basebackup Running Delete May 28,2025 12:52 UTC+0800 backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running check backup status done backup_status:backup-ns-pkuvx-postgres-cyetms-20250528125217 ns-pkuvx postgres-cyetms pg-basebackup Completed 10908050 13s Delete May 28,2025 12:52 UTC+0800 May 28,2025 12:52 UTC+0800 cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-cyetms-backup" not found `kbcli cluster describe-backup --names backup-ns-pkuvx-postgres-cyetms-20250528125217 --namespace ns-pkuvx ` Name: backup-ns-pkuvx-postgres-cyetms-20250528125217 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: pg-basebackup Policy Name: postgres-cyetms-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-pkuvx-postgres-cyetms-20250528125217-9b34 TargetPodName: postgres-cyetms-postgresql-0 Phase: Completed Start Time: May 28,2025 12:52 UTC+0800 Completion Time: May 28,2025 12:52 UTC+0800 Status: Phase: Completed Total Size: 10908050 ActionSet Name: postgresql-basebackup Repository: backuprepo-kbcli-test Duration: 13s Start Time: May 28,2025 12:52 UTC+0800 Completion Time: May 28,2025 12:52 UTC+0800 Path: /ns-pkuvx/postgres-cyetms-7973eee7-f544-452f-8993-1827f6b799c2/postgresql/backup-ns-pkuvx-postgres-cyetms-20250528125217 Time Range Start: May 28,2025 12:52 UTC+0800 Time Range End: May 28,2025 12:52 UTC+0800 Warning Events: `kbcli cluster restore postgres-cyetms-backup --backup backup-ns-pkuvx-postgres-cyetms-20250528125217 --namespace ns-pkuvx ` Cluster postgres-cyetms-backup created check cluster status `kbcli cluster list postgres-cyetms-backup --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms-backup ns-pkuvx postgresql WipeOut Creating May 28,2025 12:52 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances 
postgres-cyetms-backup --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-backup-postgresql-0 ns-pkuvx postgres-cyetms-backup postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-12-169.us-west-2.compute.internal/172.31.12.169 May 28,2025 12:52 UTC+0800 postgres-cyetms-backup-postgresql-1 ns-pkuvx postgres-cyetms-backup postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-9-41.us-west-2.compute.internal/172.31.9.41 May 28,2025 12:52 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-backup-postgresql-1;secondary: postgres-cyetms-backup-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-backup-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kbcli cluster describe-backup --names backup-ns-pkuvx-postgres-cyetms-20250528125217 --namespace ns-pkuvx ` Name: backup-ns-pkuvx-postgres-cyetms-20250528125217 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: pg-basebackup Policy Name: postgres-cyetms-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-pkuvx-postgres-cyetms-20250528125217-9b34 TargetPodName: postgres-cyetms-postgresql-0 Phase: Completed Start Time: May 28,2025 12:52 UTC+0800 Completion Time: May 28,2025 12:52 UTC+0800 Status: Phase: Completed Total Size: 10908050 ActionSet Name: postgresql-basebackup Repository: backuprepo-kbcli-test Duration: 13s Start Time: May 28,2025 12:52 UTC+0800 Completion Time: May 28,2025 12:52 UTC+0800 Path: /ns-pkuvx/postgres-cyetms-7973eee7-f544-452f-8993-1827f6b799c2/postgresql/backup-ns-pkuvx-postgres-cyetms-20250528125217 Time Range Start: May 28,2025 12:52 UTC+0800 Time Range End: May 28,2025 12:52 UTC+0800 Warning Events: cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-cyetms-backup-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: extension "vector" already exists Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | echo '\dx;' | kubectl exec -it postgres-cyetms-backup-postgresql-1 
--namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) connect cluster Success delete cluster postgres-cyetms-backup `kbcli cluster delete postgres-cyetms-backup --auto-approve --namespace ns-pkuvx ` Cluster postgres-cyetms-backup deleted pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 118s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 118s pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 2m18s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 2m18s No resources found in ns-pkuvx namespace. delete cluster pod done No resources found in ns-pkuvx namespace. check cluster resource non-exist OK: pvc No resources found in ns-pkuvx namespace. delete cluster done No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. 
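For reference, the pg-basebackup run above via `kbcli cluster backup` can also be requested declaratively with a Backup object against the policy reported by `kbcli cluster describe-backup`. A minimal sketch, assuming the dataprotection.kubeblocks.io/v1alpha1 API of the installed KubeBlocks 1.0.0 release; the metadata.name is illustrative and the field names should be verified against the cluster's CRDs:
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: Backup
metadata:
  name: backup-postgres-cyetms-manual   # illustrative name, not taken from this run
  namespace: ns-pkuvx
spec:
  backupMethod: pg-basebackup
  backupPolicyName: postgres-cyetms-postgresql-backup-policy
  deletionPolicy: Delete
Applied with `kubectl apply -f`, it should move through the same Running and Completed phases that `kbcli cluster list-backups` shows above.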
cluster rebuild instances apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: postgres-cyetms-rebuildinstance- namespace: ns-pkuvx spec: type: RebuildInstance clusterName: postgres-cyetms force: true rebuildFrom: - componentName: postgresql instances: - name: postgres-cyetms-postgresql-0 backupName: backup-ns-pkuvx-postgres-cyetms-20250528125217 inPlace: true check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_postgres-cyetms.yaml` opsrequest.operations.kubeblocks.io/postgres-cyetms-rebuildinstance-br4xg created create test_ops_cluster_postgres-cyetms.yaml Success `rm -rf test_ops_cluster_postgres-cyetms.yaml` check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Running 0/1 May 28,2025 12:55 UTC+0800 check ops status done ops_status:postgres-cyetms-rebuildinstance-br4xg ns-pkuvx RebuildInstance postgres-cyetms postgresql Succeed 1/1 May 28,2025 12:55 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-rebuildinstance-br4xg --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-rebuildinstance-br4xg patched `kbcli cluster delete-ops --name postgres-cyetms-rebuildinstance-br4xg --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-rebuildinstance-br4xg deleted check cluster
status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql WipeOut Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-12-169.us-west-2.compute.internal/172.31.12.169 May 28,2025 12:56 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success cluster delete backup `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge backups backup-ns-pkuvx-postgres-cyetms-20250528125217 --namespace ns-pkuvx ` backup.dataprotection.kubeblocks.io/backup-ns-pkuvx-postgres-cyetms-20250528125217 patched `kbcli cluster delete-backup postgres-cyetms --name backup-ns-pkuvx-postgres-cyetms-20250528125217 --force --auto-approve --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528125217 deleted No opsrequests found in ns-pkuvx namespace. 
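For readability, the RebuildInstance OpsRequest applied at the start of this step (and force-deleted after it succeeded) appears flattened in the log above; re-indented, with every value copied from the log and only the nesting of backupName and inPlace inferred from the operations.kubeblocks.io/v1alpha1 API, it reads:
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: postgres-cyetms-rebuildinstance-
  namespace: ns-pkuvx
spec:
  type: RebuildInstance
  clusterName: postgres-cyetms
  force: true
  rebuildFrom:
  - componentName: postgresql
    instances:
    - name: postgres-cyetms-postgresql-0
    backupName: backup-ns-pkuvx-postgres-cyetms-20250528125217
    inPlace: true
With inPlace: true the existing postgres-cyetms-postgresql-0 instance is rebuilt in place from the named backup rather than replaced under a new name, which matches the list-instances output above: the pod name is unchanged while its CREATED-TIME moved to May 28,2025 12:56, whereas postgres-cyetms-postgresql-1 kept 12:22.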
`kubectl get backupschedule -l app.kubernetes.io/instance=postgres-cyetms ` `kubectl get backupschedule postgres-cyetms-postgresql-backup-schedule -ojsonpath='***.spec.schedules[*].backupMethod***' ` backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched cluster pg-basebackup backup `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="***.spec.credential.name***"` `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="***.spec.credential.namespace***"` `kubectl get secrets kb-backuprepo-mb4vp -n kb-tgili -o jsonpath="***.data.accessKeyId***"` `kubectl get secrets kb-backuprepo-mb4vp -n kb-tgili -o jsonpath="***.data.secretAccessKey***"` KUBEBLOCKS NAMESPACE:kb-tgili get kubeblocks namespace done `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-tgili -o jsonpath="***.items[0].data.root-user***"` `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-tgili -o jsonpath="***.items[0].data.root-password***"` minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-tgili.svc.cluster.local:9000 list minio bucket kbcli-test `echo 'mc config host add minioserver http://kbcli-test-minio.kb-tgili.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-5f4dfb568b-4g59b --namespace kb-tgili -- bash` Unable to use a TTY - input is not a terminal or the right kind of file list minio bucket done default backuprepo:backuprepo-kbcli-test exists `kbcli cluster backup postgres-cyetms --method pg-basebackup --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528125811 created successfully, you can view the progress: kbcli cluster list-backups --name=backup-ns-pkuvx-postgres-cyetms-20250528125811 -n ns-pkuvx check backup status `kbcli cluster list-backups postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION 7973eee7-postgres-cyetms-postg-archive-wal ns-pkuvx postgres-cyetms archive-wal Running(AvailablePods: 1) Delete May 28,2025 12:58 UTC+0800 backup-ns-pkuvx-postgres-cyetms-20250528125811 ns-pkuvx postgres-cyetms pg-basebackup Running Delete May 28,2025 12:58 UTC+0800 backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running check backup status done backup_status:backup-ns-pkuvx-postgres-cyetms-20250528125811 ns-pkuvx postgres-cyetms 
pg-basebackup Completed 10921273 29s Delete May 28,2025 12:58 UTC+0800 May 28,2025 12:58 UTC+0800 `create table if not exists msg(id SERIAL PRIMARY KEY, msg text, time timestamp);insert into msg (msg, time) values ('kbcli-test-data-cyetms0', now());` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file CREATE TABLE INSERT 0 1 `insert into msg (msg, time) values ('kbcli-test-data-cyetms1', now());` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file INSERT 0 1 Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file pg_switch_wal --------------- 0/142E07A8 (1 row) `insert into msg (msg, time) values ('kbcli-test-data-cyetms2', now());` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file INSERT 0 1 Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file pg_switch_wal --------------- 0/15000140 (1 row) checking recoverable time 1 recoverable time:May 28,2025 12:58:49 UTC+0800 `insert into msg (msg, time) values ('kbcli-test-data-cyetms4', now());` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file INSERT 0 1 Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file pg_switch_wal --------------- 0/16000108 (1 row) check recoverable time 1 done recoverable time:May 28,2025 12:59:00 UTC+0800 cluster restore-to-time backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-cyetms-backup" not found `kbcli cluster restore postgres-cyetms-backup --backup 7973eee7-postgres-cyetms-postg-archive-wal --restore-to-time "May 28,2025 12:59:00 UTC+0800" --namespace ns-pkuvx ` Cluster postgres-cyetms-backup created check cluster status `kbcli cluster list postgres-cyetms-backup --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms-backup ns-pkuvx postgresql WipeOut Creating May 28,2025 12:59 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating 
cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating [Error] check cluster status timeout --------------------------------------get cluster postgres-cyetms-backup yaml-------------------------------------- `kubectl get cluster postgres-cyetms-backup -o yaml --namespace ns-pkuvx ` apiVersion: apps.kubeblocks.io/v1 kind: Cluster metadata: annotations: kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 kubeblocks.io/ops-request: '[***"name":"postgres-cyetms-backup","type":"Restore"***]' kubeblocks.io/restore-from-backup: '***"postgresql":***"doReadyRestoreAfterClusterRunning":"false","name":"7973eee7-postgres-cyetms-postg-archive-wal","namespace":"ns-pkuvx","restoreTime":"2025-05-28T04:59:00Z","volumeRestorePolicy":"Parallel"***' creationTimestamp: "2025-05-28T04:59:24Z" finalizers: - cluster.kubeblocks.io/finalizer generation: 1 labels: clusterdefinition.kubeblocks.io/name: postgresql name: postgres-cyetms-backup namespace: ns-pkuvx resourceVersion: "115977" uid: 97921ca0-efc9-447e-b60d-f4193f9a96ef spec: clusterDef: postgresql componentSpecs: - annotations: kubeblocks.io/restart: "2025-05-28T03:48:50Z" componentDef: postgresql-12-1.0.0-alpha.0 disableExporter: true labels: apps.kubeblocks.postgres.patroni/scope: postgres-cyetms-postgresql name: postgresql replicas: 2 resources: limits: cpu: 200m memory: 644245094400m requests: cpu: 200m memory: 644245094400m serviceVersion: 12.14.0 volumeClaimTemplates: - name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi terminationPolicy: WipeOut topology: replication status: components: postgresql: phase: Creating conditions: - lastTransitionTime: "2025-05-28T04:59:24Z" message: 'The operator has started the provisioning of Cluster: postgres-cyetms-backup' observedGeneration: 1 reason: PreCheckSucceed status: "True" type: ProvisioningStarted - lastTransitionTime: 
"2025-05-28T04:59:24Z" message: Successfully applied for resources observedGeneration: 1 reason: ApplyResourcesSucceed status: "True" type: ApplyResources observedGeneration: 1 phase: Creating ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe cluster postgres-cyetms-backup-------------------------------------- `kubectl describe cluster postgres-cyetms-backup --namespace ns-pkuvx ` Name: postgres-cyetms-backup Namespace: ns-pkuvx Labels: clusterdefinition.kubeblocks.io/name=postgresql Annotations: kubeblocks.io/crd-api-version: apps.kubeblocks.io/v1 kubeblocks.io/ops-request: [***"name":"postgres-cyetms-backup","type":"Restore"***] kubeblocks.io/restore-from-backup: ***"postgresql":***"doReadyRestoreAfterClusterRunning":"false","name":"7973eee7-postgres-cyetms-postg-archive-wal","namespace":"ns-pkuvx","res... API Version: apps.kubeblocks.io/v1 Kind: Cluster Metadata: Creation Timestamp: 2025-05-28T04:59:24Z Finalizers: cluster.kubeblocks.io/finalizer Generation: 1 Resource Version: 115977 UID: 97921ca0-efc9-447e-b60d-f4193f9a96ef Spec: Cluster Def: postgresql Component Specs: Annotations: kubeblocks.io/restart: 2025-05-28T03:48:50Z Component Def: postgresql-12-1.0.0-alpha.0 Disable Exporter: true Labels: apps.kubeblocks.postgres.patroni/scope: postgres-cyetms-postgresql Name: postgresql Replicas: 2 Resources: Limits: Cpu: 200m Memory: 644245094400m Requests: Cpu: 200m Memory: 644245094400m Service Version: 12.14.0 Volume Claim Templates: Name: data Spec: Access Modes: ReadWriteOnce Resources: Requests: Storage: 5Gi Termination Policy: WipeOut Topology: replication Status: Components: Postgresql: Phase: Creating Conditions: Last Transition Time: 2025-05-28T04:59:24Z Message: The operator has started the provisioning of Cluster: postgres-cyetms-backup Observed Generation: 1 Reason: PreCheckSucceed Status: True Type: ProvisioningStarted Last Transition Time: 2025-05-28T04:59:24Z Message: Successfully applied for resources Observed Generation: 1 Reason: ApplyResourcesSucceed Status: True Type: ApplyResources Observed Generation: 1 Phase: Creating Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal PreCheckSucceed 8m6s cluster-controller The operator has started the provisioning of Cluster: postgres-cyetms-backup Normal ApplyResourcesSucceed 8m6s cluster-controller Successfully applied for resources Warning ReconcileBackupPolicyFail 8m6s backup-policy-driver-controller failed to reconcile: Operation cannot be fulfilled on backuppolicies.dataprotection.kubeblocks.io "postgres-cyetms-backup-postgresql-backup-policy": the object has been modified; please apply your changes to the latest version and try again Normal ClusterComponentPhaseTransition 7m33s (x2 over 7m33s) cluster-controller cluster component postgresql is Creating ------------------------------------------------------------------------------------------------------------------ check pod status `kbcli cluster list-instances postgres-cyetms-backup --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-backup-postgresql-0 ns-pkuvx postgres-cyetms-backup postgresql Running us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-4-134.us-west-2.compute.internal/172.31.4.134 May 28,2025 12:59 UTC+0800 postgres-cyetms-backup-postgresql-1 ns-pkuvx postgres-cyetms-backup postgresql Running 
primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-12-169.us-west-2.compute.internal/172.31.12.169 May 28,2025 12:59 UTC+0800 check pod status done `select * from msg;` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file id | msg | time ----+-------------------------+---------------------------- 1 | kbcli-test-data-cyetms0 | 2025-05-28 04:58:45.403389 2 | kbcli-test-data-cyetms1 | 2025-05-28 04:58:49.609921 (2 rows) Point-In-Time Recovery Success `kubectl get secrets -l app.kubernetes.io/instance=postgres-cyetms` set secret: postgres-cyetms-postgresql-account-postgres `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.username***"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.password***"` `kubectl get secrets postgres-cyetms-postgresql-account-postgres -o jsonpath="***.data.port***"` DB_USERNAME:postgres;DB_PASSWORD:G18jFmD652;DB_PORT:5432;DB_DATABASE:postgres `echo 'DROP TABLE msg;' | kubectl exec -it postgres-cyetms-postgresql-1 -n default -- psql -U postgres ` Error from server (NotFound): pods "postgres-cyetms-postgresql-1" not found `kubectl get backupschedule -l app.kubernetes.io/instance=postgres-cyetms ` `kubectl get backupschedule postgres-cyetms-postgresql-backup-schedule -ojsonpath='***.spec.schedules[*].backupMethod***' ` backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-cyetms-backup-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: extension "vector" already exists Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | 
Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | echo '\dx;' | kubectl exec -it postgres-cyetms-backup-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) connect cluster Success delete cluster postgres-cyetms-backup `kbcli cluster delete postgres-cyetms-backup --auto-approve --namespace ns-pkuvx ` Cluster postgres-cyetms-backup deleted pod_info:postgres-cyetms-backup-postgresql-0 3/4 Terminating 0 8m4s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 8m4s pod_info:postgres-cyetms-backup-postgresql-0 3/4 Terminating 0 8m25s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 8m25s No resources found in ns-pkuvx namespace. delete cluster pod done No resources found in ns-pkuvx namespace. check cluster resource non-exist OK: pvc No resources found in ns-pkuvx namespace. delete cluster done No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. 
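For reference, the point-in-time restore requested earlier with `--restore-to-time "May 28,2025 12:59:00 UTC+0800"` is recorded on the restored Cluster as an annotation. The fragment below is copied from the `kubectl get cluster postgres-cyetms-backup -o yaml` output above, with the curly braces that this log masks as *** put back (the same masking applies to the finalizer-clearing patches, whose payload is '{"metadata":{"finalizers":[]}}'):
metadata:
  annotations:
    kubeblocks.io/restore-from-backup: '{"postgresql":{"doReadyRestoreAfterClusterRunning":"false","name":"7973eee7-postgres-cyetms-postg-archive-wal","namespace":"ns-pkuvx","restoreTime":"2025-05-28T04:59:00Z","volumeRestorePolicy":"Parallel"}}'
The restoreTime value is simply the UTC form of the requested 12:59:00 UTC+0800 timestamp.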
cluster delete backup `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge backups 7973eee7-postgres-cyetms-postg-archive-wal --namespace ns-pkuvx ` backup.dataprotection.kubeblocks.io/7973eee7-postgres-cyetms-postg-archive-wal patched `kbcli cluster delete-backup postgres-cyetms --name 7973eee7-postgres-cyetms-postg-archive-wal --force --auto-approve --namespace ns-pkuvx ` Backup 7973eee7-postgres-cyetms-postg-archive-wal deleted `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge backups backup-ns-pkuvx-postgres-cyetms-20250528125811 --namespace ns-pkuvx ` backup.dataprotection.kubeblocks.io/backup-ns-pkuvx-postgres-cyetms-20250528125811 patched `kbcli cluster delete-backup postgres-cyetms --name backup-ns-pkuvx-postgres-cyetms-20250528125811 --force --auto-approve --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528125811 deleted `kubectl get backupschedule -l app.kubernetes.io/instance=postgres-cyetms ` `kubectl get backupschedule postgres-cyetms-postgresql-backup-schedule -ojsonpath='***.spec.schedules[*].backupMethod***' ` backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched check backup status `kbcli cluster list-backups postgres-cyetms --namespace ns-pkuvx ` No backups found in ns-pkuvx namespace. `kubectl get backupschedule -l app.kubernetes.io/instance=postgres-cyetms ` `kubectl get backupschedule postgres-cyetms-postgresql-backup-schedule -ojsonpath='***.spec.schedules[*].backupMethod***' ` backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backup_status:postgres-cyetms-pg-basebackup-Running backup_status:postgres-cyetms-pg-basebackup-Running check backup status done backup_status:postgres-cyetms-pg-basebackup-20250528050900 ns-pkuvx postgres-cyetms pg-basebackup Completed 9501375 22s Delete May 28,2025 13:09 UTC+0800 May 28,2025 13:09 UTC+0800 Jun 04,2025 13:09 UTC+0800 `kubectl get backupschedule -l app.kubernetes.io/instance=postgres-cyetms ` `kubectl get backupschedule postgres-cyetms-postgresql-backup-schedule -ojsonpath='***.spec.schedules[*].backupMethod***' ` backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) 
backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) backupschedule.dataprotection.kubeblocks.io/postgres-cyetms-postgresql-backup-schedule patched (no change) cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-cyetms-backup" not found `kbcli cluster describe-backup --names postgres-cyetms-pg-basebackup-20250528050900 --namespace ns-pkuvx ` Name: postgres-cyetms-pg-basebackup-20250528050900 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: pg-basebackup Policy Name: postgres-cyetms-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-postgres-cyetms-pg-basebackup-20250528050900-e459c3 TargetPodName: postgres-cyetms-postgresql-0 Phase: Completed Start Time: May 28,2025 13:09 UTC+0800 Completion Time: May 28,2025 13:09 UTC+0800 Status: Phase: Completed Total Size: 9501375 ActionSet Name: postgresql-basebackup Repository: backuprepo-kbcli-test Duration: 22s Expiration Time: Jun 04,2025 13:09 UTC+0800 Start Time: May 28,2025 13:09 UTC+0800 Completion Time: May 28,2025 13:09 UTC+0800 Path: /ns-pkuvx/postgres-cyetms-7973eee7-f544-452f-8993-1827f6b799c2/postgresql/postgres-cyetms-pg-basebackup-20250528050900 Time Range Start: May 28,2025 13:09 UTC+0800 Time Range End: May 28,2025 13:09 UTC+0800 Warning Events: `kbcli cluster restore postgres-cyetms-backup --backup postgres-cyetms-pg-basebackup-20250528050900 --namespace ns-pkuvx ` Cluster postgres-cyetms-backup created check cluster status `kbcli cluster list postgres-cyetms-backup --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms-backup ns-pkuvx postgresql WipeOut Creating May 28,2025 13:09 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms-backup --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-backup-postgresql-0 ns-pkuvx 
postgres-cyetms-backup postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 13:10 UTC+0800 postgres-cyetms-backup-postgresql-1 ns-pkuvx postgres-cyetms-backup postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-4-134.us-west-2.compute.internal/172.31.4.134 May 28,2025 13:10 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-backup-postgresql-0;secondary: postgres-cyetms-backup-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kbcli cluster describe-backup --names postgres-cyetms-pg-basebackup-20250528050900 --namespace ns-pkuvx ` Name: postgres-cyetms-pg-basebackup-20250528050900 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: pg-basebackup Policy Name: postgres-cyetms-postgresql-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-postgres-cyetms-pg-basebackup-20250528050900-e459c3 TargetPodName: postgres-cyetms-postgresql-0 Phase: Completed Start Time: May 28,2025 13:09 UTC+0800 Completion Time: May 28,2025 13:09 UTC+0800 Status: Phase: Completed Total Size: 9501375 ActionSet Name: postgresql-basebackup Repository: backuprepo-kbcli-test Duration: 22s Expiration Time: Jun 04,2025 13:09 UTC+0800 Start Time: May 28,2025 13:09 UTC+0800 Completion Time: May 28,2025 13:09 UTC+0800 Path: /ns-pkuvx/postgres-cyetms-7973eee7-f544-452f-8993-1827f6b799c2/postgresql/postgres-cyetms-pg-basebackup-20250528050900 Time Range Start: May 28,2025 13:09 UTC+0800 Time Range End: May 28,2025 13:09 UTC+0800 Warning Events: cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: extension "vector" already exists Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | echo '\dx;' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), 
init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) connect cluster Success delete cluster postgres-cyetms-backup `kbcli cluster delete postgres-cyetms-backup --auto-approve --namespace ns-pkuvx ` Cluster postgres-cyetms-backup deleted pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 83s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 83s pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 104s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 104s No resources found in ns-pkuvx namespace. delete cluster pod done No resources found in ns-pkuvx namespace. check cluster resource non-exist OK: pvc No resources found in ns-pkuvx namespace. delete cluster done No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. cluster delete backup `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge backups postgres-cyetms-pg-basebackup-20250528050900 --namespace ns-pkuvx ` backup.dataprotection.kubeblocks.io/postgres-cyetms-pg-basebackup-20250528050900 patched `kbcli cluster delete-backup postgres-cyetms --name postgres-cyetms-pg-basebackup-20250528050900 --force --auto-approve --namespace ns-pkuvx ` Backup postgres-cyetms-pg-basebackup-20250528050900 deleted cluster volume-snapshot backup `kbcli cluster backup postgres-cyetms --method volume-snapshot --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528131226 created successfully, you can view the progress: kbcli cluster list-backups --name=backup-ns-pkuvx-postgres-cyetms-20250528131226 -n ns-pkuvx check backup status `kbcli cluster list-backups postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-pkuvx-postgres-cyetms-20250528131226 ns-pkuvx postgres-cyetms volume-snapshot Running Delete May 28,2025 13:12 UTC+0800 backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running backup_status:postgres-cyetms-volume-snapshot-Running check backup status done backup_status:backup-ns-pkuvx-postgres-cyetms-20250528131226 ns-pkuvx 
postgres-cyetms volume-snapshot Completed 5Gi 68s Delete May 28,2025 13:12 UTC+0800 May 28,2025 13:13 UTC+0800 cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "postgres-cyetms-backup" not found `kbcli cluster describe-backup --names backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx ` Name: backup-ns-pkuvx-postgres-cyetms-20250528131226 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: volume-snapshot Policy Name: postgres-cyetms-postgresql-backup-policy Actions: createVolumeSnapshot-0: panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b1becf] goroutine 1 [running]: github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc000820d80, 0xc0009222c8) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4cf github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc000820d80, ***0xc001495e50?, 0x18fd69b?, 0xc0014e2908?***) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125 github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, ***0xc000592dc0?, 0x0?, 0x4371d8f500000000?***) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66 github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc0014a7508?, ***0xc000592dc0, 0x0, 0x4***) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5 github.com/spf13/cobra.(*Command).execute(0xc0014a7508, ***0xc000592900, 0x4, 0x4***) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xab1 github.com/spf13/cobra.(*Command).ExecuteC(0xc001018f08) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff github.com/spf13/cobra.(*Command).Execute(...) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041 k8s.io/component-base/cli.run(0xc001018f08) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290 k8s.io/component-base/cli.RunNoErrOutput(...) 
/home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84 main.main() /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18 `kbcli cluster restore postgres-cyetms-backup --backup backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx ` Cluster postgres-cyetms-backup created check cluster status `kbcli cluster list postgres-cyetms-backup --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms-backup ns-pkuvx postgresql WipeOut Creating May 28,2025 13:13 UTC+0800 clusterdefinition.kubeblocks.io/name=postgresql cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms-backup --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-backup-postgresql-0 ns-pkuvx postgres-cyetms-backup postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-4-134.us-west-2.compute.internal/172.31.4.134 May 28,2025 13:13 UTC+0800 postgres-cyetms-backup-postgresql-1 ns-pkuvx postgres-cyetms-backup postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-5-153.us-west-2.compute.internal/172.31.5.153 May 28,2025 13:13 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-backup-postgresql-0;secondary: postgres-cyetms-backup-postgresql-1 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done `kbcli cluster describe-backup --names backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx ` Name: backup-ns-pkuvx-postgres-cyetms-20250528131226 Cluster: postgres-cyetms Namespace: ns-pkuvx Spec: Method: volume-snapshot Policy Name: postgres-cyetms-postgresql-backup-policy Actions: createVolumeSnapshot-0: panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x2b1becf] goroutine 1 [running]: github.com/apecloud/kbcli/pkg/cmd/dataprotection.PrintBackupObjDescribe(0xc0005f66c0, 0xc000d082c8) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:480 +0x4cf github.com/apecloud/kbcli/pkg/cmd/dataprotection.DescribeBackups(0xc0005f66c0, ***0xc0012703b0?, 0x18fd69b?, 0xc000fecfc8?***) /home/runner/work/kbcli/kbcli/pkg/cmd/dataprotection/backup.go:458 +0x125 github.com/apecloud/kbcli/pkg/cmd/cluster.describeBackups(0x0?, ***0xc0006c95c0?, 0x0?, 0x8b238f9700000000?***) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:204 +0x66 github.com/apecloud/kbcli/pkg/cmd/cluster.NewDescribeBackupCmd.func1(0xc00143bb08?, ***0xc0006c95c0, 0x0, 0x4***) /home/runner/work/kbcli/kbcli/pkg/cmd/cluster/dataprotection.go:195 +0xe5 github.com/spf13/cobra.(*Command).execute(0xc00143bb08, ***0xc0006c9580, 0x4, 0x4***) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:989 +0xab1 github.com/spf13/cobra.(*Command).ExecuteC(0xc00095f508) /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117 +0x3ff github.com/spf13/cobra.(*Command).Execute(...) 
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041 k8s.io/component-base/cli.run(0xc00095f508) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:146 +0x290 k8s.io/component-base/cli.RunNoErrOutput(...) /home/runner/go/pkg/mod/k8s.io/component-base@v0.29.2/cli/run.go:84 main.main() /home/runner/work/kbcli/kbcli/cmd/cli/main.go:31 +0x18 cluster connect `echo 'create extension vector;' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file ERROR: extension "vector" already exists Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) `echo 'show max_connections;' | echo '\dx;' | kubectl exec -it postgres-cyetms-backup-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file List of installed extensions Name | Version | Schema | Description --------------------+---------+------------+----------------------------------------------------------- file_fdw | 1.0 | public | foreign-data wrapper for flat file access pg_auth_mon | 1.1 | public | monitor connection attempts per user pg_cron | 1.5 | pg_catalog | Job scheduler for PostgreSQL pg_stat_kcache | 2.2.3 | public | Kernel statistics gathering pg_stat_statements | 1.7 | public | track execution statistics of all SQL statements executed plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language set_user | 3.0 | public | similar to SET ROLE but with added logging vector | 0.6.1 | public | vector data type and ivfflat and hnsw access methods (9 rows) connect cluster Success delete cluster postgres-cyetms-backup `kbcli cluster delete postgres-cyetms-backup --auto-approve --namespace ns-pkuvx ` Cluster postgres-cyetms-backup deleted pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 70s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 70s pod_info:postgres-cyetms-backup-postgresql-0 4/4 Terminating 0 91s postgres-cyetms-backup-postgresql-1 4/4 Terminating 0 91s No resources found in ns-pkuvx namespace. 
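Where `kbcli cluster describe-backup` panics above while printing the volume-snapshot backup's actions, the same status can be read from the Backup resource directly; a minimal sketch, assuming the backup name and namespace from this run: `kubectl get backups backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx -o yaml`. The force-delete path used for backups in this run first clears finalizers with a merge patch; written out with plain JSON braces, a sketch of that step is `kubectl patch backups backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx --type=merge -p '{"metadata":{"finalizers":[]}}'` followed by `kbcli cluster delete-backup postgres-cyetms --name backup-ns-pkuvx-postgres-cyetms-20250528131226 --force --auto-approve --namespace ns-pkuvx`.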
delete cluster pod done No resources found in ns-pkuvx namespace. check cluster resource non-exist OK: pvc No resources found in ns-pkuvx namespace. delete cluster done No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. cluster delete backup `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge backups backup-ns-pkuvx-postgres-cyetms-20250528131226 --namespace ns-pkuvx ` backup.dataprotection.kubeblocks.io/backup-ns-pkuvx-postgres-cyetms-20250528131226 patched `kbcli cluster delete-backup postgres-cyetms --name backup-ns-pkuvx-postgres-cyetms-20250528131226 --force --auto-approve --namespace ns-pkuvx ` Backup backup-ns-pkuvx-postgres-cyetms-20250528131226 deleted cluster list-logs `kbcli cluster list-logs postgres-cyetms --namespace ns-pkuvx ` No log files found. Error from server (NotFound): pods "postgres-cyetms-postgresql-1" not found cluster logs `kbcli cluster logs postgres-cyetms --tail 30 --namespace ns-pkuvx ` Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) 2025-05-28 05:12:03,978 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:13,816 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:23,816 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:26.125 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:12:33,899 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:43,825 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:53,817 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:12:56.150 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:13:03,944 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:13,825 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:23,831 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:26.177 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:13:34,002 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:43,815 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:53,816 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:13:56.194 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:14:03,977 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:13,818 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:23,819 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:26.205 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:14:33,935 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:43,818 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:53,820 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:14:56.232 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:15:03,887 INFO: no action. 
I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:15:13,821 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:15:23,817 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:15:26.242 UTC [42] LOG ***ticks: 0, maint: 0, retry: 0*** 2025-05-28 05:15:33,894 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock 2025-05-28 05:15:43,818 INFO: no action. I am (postgres-cyetms-postgresql-1), the leader with the lock cluster logs running `kbcli cluster logs postgres-cyetms --tail 30 --file-type=running --namespace ns-pkuvx ` ==> /home/postgres/pgdata/pgroot/data/log/postgresql-2025-05-28.csv <== 2025-05-28 05:15:42.037 GMT,"postgres","postgres",4687,"127.0.0.1:57406",68369bfe.124f,1,"BIND",2025-05-28 05:15:42 GMT,8/4817,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:42.046 GMT,"postgres","postgres",4687,"127.0.0.1:57406",68369bfe.124f,2,"idle",2025-05-28 05:15:42 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:42.995 GMT,"postgres","postgres",4689,"127.0.0.1:57410",68369bfe.1251,1,"BIND",2025-05-28 05:15:42 GMT,8/4822,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:43.002 GMT,"postgres","postgres",4689,"127.0.0.1:57410",68369bfe.1251,2,"idle",2025-05-28 05:15:42 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:43.803 GMT,"postgres","postgres",88,"[local]",68369636.58,449,"SELECT",2025-05-28 04:51:02 GMT,2/452,0,LOG,00000,"AUDIT: SESSION,449,1,READ,SELECT,,,""SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()"",",,,,,,,,,"Patroni" 2025-05-28 05:15:44.031 GMT,"postgres","postgres",4690,"127.0.0.1:57420",68369c00.1252,1,"BIND",2025-05-28 05:15:44 GMT,8/4825,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:44.041 GMT,"postgres","postgres",4690,"127.0.0.1:57420",68369c00.1252,2,"idle",2025-05-28 05:15:44 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:45.007 GMT,"postgres","postgres",4691,"127.0.0.1:57432",68369c00.1253,1,"BIND",2025-05-28 05:15:44 GMT,8/4828,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:45.025 GMT,"postgres","postgres",4691,"127.0.0.1:57432",68369c00.1253,2,"idle",2025-05-28 05:15:44 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:45.101 GMT,"postgres","postgres",88,"[local]",68369636.58,450,"SELECT",2025-05-28 04:51:02 GMT,2/453,0,LOG,00000,"AUDIT: SESSION,450,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN 
pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni" 2025-05-28 05:15:45.953 GMT,"postgres","postgres",4698,"127.0.0.1:57444",68369c01.125a,1,"BIND",2025-05-28 05:15:45 GMT,8/4831,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:45.959 GMT,"postgres","postgres",4698,"127.0.0.1:57444",68369c01.125a,2,"idle",2025-05-28 05:15:45 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:47.017 GMT,"postgres","postgres",4700,"127.0.0.1:57450",68369c02.125c,1,"BIND",2025-05-28 05:15:46 GMT,8/4836,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:47.024 GMT,"postgres","postgres",4700,"127.0.0.1:57450",68369c02.125c,2,"idle",2025-05-28 05:15:46 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:47.998 GMT,"postgres","postgres",4701,"127.0.0.1:57460",68369c03.125d,1,"BIND",2025-05-28 05:15:47 GMT,8/4839,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:48.014 GMT,"postgres","postgres",4701,"127.0.0.1:57460",68369c03.125d,2,"idle",2025-05-28 05:15:47 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:49.028 GMT,"postgres","postgres",4702,"127.0.0.1:57470",68369c05.125e,1,"BIND",2025-05-28 05:15:49 GMT,8/4842,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:49.040 GMT,"postgres","postgres",4702,"127.0.0.1:57470",68369c05.125e,2,"idle",2025-05-28 05:15:49 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:50.013 GMT,"postgres","postgres",4704,"127.0.0.1:57478",68369c05.1260,1,"BIND",2025-05-28 05:15:49 GMT,9/283,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:50.026 GMT,"postgres","postgres",4704,"127.0.0.1:57478",68369c05.1260,2,"idle",2025-05-28 05:15:49 GMT,9/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:50.327 GMT,"postgres","postgres",88,"[local]",68369636.58,451,"SELECT",2025-05-28 04:51:02 GMT,2/454,0,LOG,00000,"AUDIT: SESSION,451,1,READ,SELECT,,,""SELECT pg_catalog.pg_postmaster_start_time(), CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), '0/0')::bigint 
END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), pg_catalog.pg_last_xact_replay_timestamp(), pg_catalog.array_to_json(pg_catalog.array_agg(pg_catalog.row_to_json(ri))) FROM (SELECT (SELECT rolname FROM pg_catalog.pg_authid WHERE oid = usesysid) AS usename, application_name, client_addr, w.state, sync_state, sync_priority FROM pg_catalog.pg_stat_get_wal_senders() w, pg_catalog.pg_stat_get_activity(pid)) AS ri"",",,,,,,,,,"Patroni" 2025-05-28 05:15:51.042 GMT,"postgres","postgres",4711,"127.0.0.1:57486",68369c07.1267,1,"BIND",2025-05-28 05:15:51 GMT,8/4847,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:51.051 GMT,"postgres","postgres",4711,"127.0.0.1:57486",68369c07.1267,2,"idle",2025-05-28 05:15:51 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:51.994 GMT,"postgres","postgres",4712,"127.0.0.1:46354",68369c07.1268,1,"BIND",2025-05-28 05:15:51 GMT,8/4850,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:52.013 GMT,"postgres","postgres",4712,"127.0.0.1:46354",68369c07.1268,2,"idle",2025-05-28 05:15:51 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:53.092 GMT,"postgres","postgres",4713,"127.0.0.1:46364",68369c09.1269,1,"BIND",2025-05-28 05:15:53 GMT,8/4853,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:53.112 GMT,"postgres","postgres",4713,"127.0.0.1:46364",68369c09.1269,2,"idle",2025-05-28 05:15:53 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" 2025-05-28 05:15:53.803 GMT,"postgres","postgres",88,"[local]",68369636.58,452,"SELECT",2025-05-28 04:51:02 GMT,2/455,0,LOG,00000,"AUDIT: SESSION,452,1,READ,SELECT,,,""SELECT CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE ('x' || pg_catalog.substr(pg_catalog.pg_walfile_name(pg_catalog.pg_current_wal_lsn()), 1, 8))::bit(32)::int END, CASE WHEN pg_catalog.pg_is_in_recovery() THEN 0 ELSE pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), '0/0')::bigint END, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_last_wal_replay_lsn(), '0/0')::bigint, pg_catalog.pg_wal_lsn_diff(COALESCE(pg_catalog.pg_last_wal_receive_lsn(), '0/0'), '0/0')::bigint, pg_catalog.pg_is_in_recovery() AND pg_catalog.pg_is_wal_replay_paused(), 0, CASE WHEN latest_end_lsn IS NULL THEN NULL ELSE received_tli END, slot_name, conninfo, NULL, 'on', '', NULL FROM pg_catalog.pg_stat_get_wal_receiver()"",",,,,,,,,,"Patroni" 2025-05-28 05:15:53.973 GMT,"postgres","postgres",4715,"127.0.0.1:46380",68369c09.126b,1,"BIND",2025-05-28 05:15:53 GMT,8/4858,0,LOG,00000,"AUDIT: SESSION,1,1,READ,SELECT,,,select pg_is_in_recovery();,",,,,,,,,,"" 2025-05-28 05:15:53.978 GMT,"postgres","postgres",4715,"127.0.0.1:46380",68369c09.126b,2,"idle",2025-05-28 05:15:53 GMT,8/0,0,LOG,08006,"could not receive data from client: Connection reset by peer",,,,,,,,,"" ==> /home/postgres/pgdata/pgroot/data/log/postgresql-2025-05-28.log <== 2025-05-28 04:17:07 GMT [83]: [10-1] 68368e42.53 0 LOG: ending log output to stderr 2025-05-28 04:17:07 GMT [83]: [11-1] 68368e42.53 0 HINT: Future log output will go to log destination "csvlog". 
2025-05-28 04:18:37 GMT [86]: [10-1] 68368e9c.56 0 LOG: ending log output to stderr 2025-05-28 04:18:37 GMT [86]: [11-1] 68368e9c.56 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:22:44 GMT [60]: [10-1] 68368f93.3c 0 LOG: ending log output to stderr 2025-05-28 04:22:44 GMT [60]: [11-1] 68368f93.3c 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:24:24 GMT [121]: [10-1] 68368ff8.79 0 LOG: ending log output to stderr 2025-05-28 04:24:24 GMT [121]: [11-1] 68368ff8.79 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:27:41 GMT [63]: [10-1] 683690bc.3f 0 LOG: ending log output to stderr 2025-05-28 04:27:41 GMT [63]: [11-1] 683690bc.3f 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:39:40 GMT [2952]: [10-1] 6836938b.b88 0 LOG: ending log output to stderr 2025-05-28 04:39:40 GMT [2952]: [11-1] 6836938b.b88 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:43:31 GMT [3900]: [10-1] 68369472.f3c 0 LOG: ending log output to stderr 2025-05-28 04:43:31 GMT [3900]: [11-1] 68369472.f3c 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:43:55 GMT [73]: [10-1] 6836948a.49 0 LOG: ending log output to stderr 2025-05-28 04:43:55 GMT [73]: [11-1] 6836948a.49 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:44:47 GMT [61]: [10-1] 683694be.3d 0 LOG: ending log output to stderr 2025-05-28 04:44:47 GMT [61]: [11-1] 683694be.3d 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:46:28 GMT [656]: [10-1] 68369523.290 0 LOG: ending log output to stderr 2025-05-28 04:46:28 GMT [656]: [11-1] 68369523.290 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:47:27 GMT [78]: [10-1] 6836955f.4e 0 LOG: ending log output to stderr 2025-05-28 04:47:27 GMT [78]: [11-1] 6836955f.4e 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:48:17 GMT [67]: [10-1] 68369590.43 0 LOG: ending log output to stderr 2025-05-28 04:48:17 GMT [67]: [11-1] 68369590.43 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:49:40 GMT [625]: [10-1] 683695e3.271 0 LOG: ending log output to stderr 2025-05-28 04:49:40 GMT [625]: [11-1] 683695e3.271 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:50:05 GMT [72]: [10-1] 683695fc.48 0 LOG: ending log output to stderr 2025-05-28 04:50:05 GMT [72]: [11-1] 683695fc.48 0 HINT: Future log output will go to log destination "csvlog". 2025-05-28 04:51:01 GMT [72]: [10-1] 68369634.48 0 LOG: ending log output to stderr 2025-05-28 04:51:01 GMT [72]: [11-1] 68369634.48 0 HINT: Future log output will go to log destination "csvlog". 
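The same running output can also be pulled per container with plain kubectl; a minimal sketch against the primary pod shown above: `kubectl logs postgres-cyetms-postgresql-1 --namespace ns-pkuvx -c postgresql --tail 30` for the Patroni/PostgreSQL stderr stream, and `kubectl exec postgres-cyetms-postgresql-1 --namespace ns-pkuvx -c postgresql -- tail -n 30 /home/postgres/pgdata/pgroot/data/log/postgresql-2025-05-28.csv` to read the csvlog file that `--file-type=running` tails in this run.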
LB_TYPE is set to: intranet cluster expose check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster expose postgres-cyetms --auto-approve --force=true --type intranet --enable false --components postgresql --role-selector primary --namespace ns-pkuvx ` OpsRequest postgres-cyetms-expose-zlk78 created successfully, you can view the progress: kbcli cluster describe-ops postgres-cyetms-expose-zlk78 -n ns-pkuvx check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-postgresql-backup-schedule-2hqlk ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:58 UTC+0800 postgres-cyetms-postgresql-backup-schedule-mnw4v ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 13:07 UTC+0800 postgres-cyetms-expose-zlk78 ns-pkuvx Expose postgres-cyetms postgresql Running 0/1 May 28,2025 13:15 UTC+0800 check cluster status `kbcli cluster list postgres-cyetms --show-labels --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS postgres-cyetms ns-pkuvx postgresql WipeOut Running May 28,2025 11:35 UTC+0800 app.kubernetes.io/instance=postgres-cyetms,clusterdefinition.kubeblocks.io/name=postgresql check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances postgres-cyetms --namespace ns-pkuvx ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME postgres-cyetms-postgresql-0 ns-pkuvx postgres-cyetms postgresql Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-12-169.us-west-2.compute.internal/172.31.12.169 May 28,2025 12:56 UTC+0800 postgres-cyetms-postgresql-1 ns-pkuvx postgres-cyetms postgresql Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:5Gi ip-172-31-13-226.us-west-2.compute.internal/172.31.13.226 May 28,2025 12:22 UTC+0800 check pod status done check cluster role check cluster role done primary: postgres-cyetms-postgresql-1;secondary: postgres-cyetms-postgresql-0 check cluster connect `echo '' | kubectl exec -it postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres` check cluster connect done check ops status `kbcli cluster list-ops postgres-cyetms --status all --namespace ns-pkuvx ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME postgres-cyetms-postgresql-backup-schedule-2hqlk ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 12:58 UTC+0800 postgres-cyetms-postgresql-backup-schedule-mnw4v ns-pkuvx Reconfiguring postgres-cyetms postgresql,postgresql Succeed -/- May 28,2025 13:07 UTC+0800 postgres-cyetms-expose-zlk78 ns-pkuvx Expose postgres-cyetms postgresql Succeed 1/1 May 28,2025 13:15 UTC+0800 check ops status done ops_status:postgres-cyetms-expose-zlk78 ns-pkuvx Expose postgres-cyetms postgresql Succeed 1/1 May 28,2025 13:15 UTC+0800 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge opsrequests.operations postgres-cyetms-expose-zlk78 --namespace ns-pkuvx ` opsrequest.operations.kubeblocks.io/postgres-cyetms-expose-zlk78 patched `kbcli cluster delete-ops --name postgres-cyetms-expose-zlk78 --force --auto-approve --namespace ns-pkuvx ` OpsRequest postgres-cyetms-expose-zlk78 deleted `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it 
postgres-cyetms-postgresql-1 --namespace ns-pkuvx -- psql -U postgres ` check data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster data consistent Success `echo 'SELECT value FROM tmp_table WHERE id = 1;' | kubectl exec -it postgres-cyetms-postgresql-0 --namespace ns-pkuvx -- psql -U postgres ` check readonly data: Defaulted container "postgresql" out of: postgresql, pgbouncer, kbagent, config-manager, pg-init-container (init), init-dbctl (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file value ------- gayfp (1 row) check cluster readonly data consistent Success check db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-1 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check db_client batch data Success check readonly db_client batch data count `echo "select count(*) from executions_loop_table;" | kubectl exec -it postgres-cyetms-postgresql-0 -n ns-pkuvx -- psql -U postgres -d executions_loop ` check readonly db_client batch data Success delete cluster postgres-cyetms `kbcli cluster delete postgres-cyetms --auto-approve --namespace ns-pkuvx ` Cluster postgres-cyetms deleted pod_info:postgres-cyetms-postgresql-0 4/4 Terminating 0 20m postgres-cyetms-postgresql-1 4/4 Terminating 8 (25m ago) 54m pod_info:postgres-cyetms-postgresql-0 4/4 Terminating 0 20m postgres-cyetms-postgresql-1 4/4 Terminating 8 (26m ago) 54m No resources found in ns-pkuvx namespace. delete cluster pod done No resources found in ns-pkuvx namespace. check cluster resource non-exist OK: pvc No resources found in ns-pkuvx namespace. delete cluster done No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. No resources found in ns-pkuvx namespace. Postgresql Test Suite All Done! --------------------------------------Postgresql (Topology = replication Replicas 2) Test Result-------------------------------------- [PASSED]|[Create]|[ComponentDefinition=postgresql-12-1.0.0-alpha.0;ComponentVersion=postgresql;ServiceVersion=12.14.0;]|[Description=Create a cluster with the specified component definition postgresql-12-1.0.0-alpha.0 and component version postgresql and service version 12.14.0] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[AddData]|[Values=gayfp]|[Description=Add data to the cluster] [PASSED]|[CheckAddDataReadonly]|[Values=gayfp;Role=Readonly]|[Description=Add data to the cluster readonly] [PASSED]|[Expose]|[Enable=true;TYPE=intranet;ComponentName=postgresql]|[Description=Expose Enable the intranet service with postgresql component] [PASSED]|[SwitchOver]|[ComponentName=postgresql]|[Description=SwitchOver the cluster specify component postgresql] [PASSED]|[Bench]|[ComponentName=postgresql]|[Description=Bench the cluster service with postgresql component] [PASSED]|[Bench]|[HostType=LB;ComponentName=postgresql]|[Description=Bench the cluster LB service with postgresql component] [PASSED]|[Failover]|[HA=Evicting Pod;ComponentName=postgresql]|[Description=Simulates conditions where pods evicting either due to node drained thereby testing the application's resilience to unavailability of some replicas due to evicting.] 
[PASSED]|[Restart]|[-]|[Description=Restart the cluster] [PASSED]|[HorizontalScaling Out]|[ComponentName=postgresql]|[Description=HorizontalScaling Out the cluster specify component postgresql] [PASSED]|[HorizontalScaling In]|[ComponentName=postgresql]|[Description=HorizontalScaling In the cluster specify component postgresql] [PASSED]|[VerticalScaling]|[ComponentName=postgresql]|[Description=VerticalScaling the cluster specify component postgresql] [PASSED]|[Failover]|[HA=Network Bandwidth Failover;Durations=2m;ComponentName=postgresql]|[Description=] [PASSED]|[Reconfiguring]|[ComponentName=postgresql;max_connections=200]|[Description=Reconfiguring the cluster specify component postgresql set max_connections=200] [PASSED]|[Failover]|[HA=Connection Stress;ComponentName=postgresql]|[Description=Simulates conditions where pods experience connection stress either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Connection load.] [PASSED]|[Failover]|[HA=Network Duplicate;Durations=2m;ComponentName=postgresql]|[Description=Simulates network duplicate fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to duplicate network.] [PASSED]|[Failover]|[HA=Network Partition;Durations=2m;ComponentName=postgresql]|[Description=Simulates network partition fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to partition network.] [PASSED]|[Failover]|[HA=Full CPU;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience CPU full either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high CPU load.] [PASSED]|[Failover]|[HA=Time Offset;Durations=2m;ComponentName=postgresql]|[Description=Simulates a time offset scenario thereby testing the application's resilience to potential slowness/unavailability of some replicas due to time offset.] [PASSED]|[Failover]|[HA=Delete Pod;ComponentName=postgresql]|[Description=Simulates conditions where pods terminating forced/graceful thereby testing deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.] [PASSED]|[Reconfiguring]|[ComponentName=postgresql;shared_buffers=512MB]|[Description=Reconfiguring the cluster specify component postgresql set shared_buffers=512MB] [PASSED]|[Stop]|[-]|[Description=Stop the cluster] [PASSED]|[Start]|[-]|[Description=Start the cluster] [PASSED]|[Failover]|[HA=OOM;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience OOM either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to high Memory load.] [PASSED]|[Failover]|[HA=Kill 1;ComponentName=postgresql]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.] [PASSED]|[Failover]|[HA=Pod Failure;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience failure for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to failure.] 
[PASSED]|[VolumeExpansion]|[ComponentName=postgresql]|[Description=VolumeExpansion the cluster specify component postgresql] [PASSED]|[Failover]|[HA=DNS Random;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience random IP addresses being returned by the DNS service for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to the DNS service returning random IP addresses.] [PASSED]|[Failover]|[HA=Network Delay;Durations=2m;ComponentName=postgresql]|[Description=Simulates network delay fault thereby testing the application's resilience to potential slowness/unavailability of some replicas due to delay network.] [PASSED]|[Failover]|[HA=DNS Error;Durations=2m;ComponentName=postgresql]|[Description=Simulates conditions where pods experience DNS service errors for a period of time either due to expected/undesired processes thereby testing the application's resilience to potential slowness/unavailability of some replicas due to DNS service errors.] [PASSED]|[Failover]|[HA=Network Corrupt Failover;Durations=2m;ComponentName=postgresql]|[Description=] [PASSED]|[Failover]|[HA=Network Loss Failover;Durations=2m;ComponentName=postgresql]|[Description=] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.14.0;ComponentVersionTo=12.15.0]|[Description=Upgrade the cluster specify component postgresql service version from 12.14.0 to 12.15.0] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.15.0;ComponentVersionTo=12.14.0]|[Description=Upgrade the cluster specify component postgresql service version from 12.15.0 to 12.14.0] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.14.0;ComponentVersionTo=12.14.1]|[Description=Upgrade the cluster specify component postgresql service version from 12.14.0 to 12.14.1] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.14.1;ComponentVersionTo=12.15.0]|[Description=Upgrade the cluster specify component postgresql service version from 12.14.1 to 12.15.0] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.15.0;ComponentVersionTo=12.14.1]|[Description=Upgrade the cluster specify component postgresql service version from 12.15.0 to 12.14.1] [PASSED]|[Upgrade]|[ComponentName=postgresql;ComponentVersionFrom=12.14.1;ComponentVersionTo=12.14.0]|[Description=Upgrade the cluster specify component postgresql service version from 12.14.1 to 12.14.0] [PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut] [PASSED]|[Backup]|[BackupMethod=pg-basebackup]|[Description=The cluster pg-basebackup Backup] [PASSED]|[Restore]|[BackupMethod=pg-basebackup]|[Description=The cluster pg-basebackup Restore] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[Delete Restore Cluster]|[BackupMethod=pg-basebackup]|[Description=Delete the pg-basebackup restore cluster] [PASSED]|[RebuildInstance]|[ComponentName=postgresql]|[Description=Rebuild the cluster instance specify component postgresql] [PASSED]|[Backup]|[BackupMethod=pg-basebackup]|[Description=The cluster pg-basebackup Backup] [PASSED]|[Restore To Time]|[BackupMethod=pg-basebackup]|[Description=The cluster pg-basebackup Restore To Time] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[Delete Restore Cluster]|[BackupMethod=pg-basebackup]|[Description=Delete the pg-basebackup 
restore cluster] [PASSED]|[Backup]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=The cluster Schedule pg-basebackup Backup] [PASSED]|[Restore]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=The cluster Schedule pg-basebackup Restore] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[Delete Restore Cluster]|[Schedule=true;BackupMethod=pg-basebackup]|[Description=Delete the Schedule pg-basebackup restore cluster] [PASSED]|[Backup]|[BackupMethod=volume-snapshot]|[Description=The cluster volume-snapshot Backup] [PASSED]|[Restore]|[BackupMethod=volume-snapshot]|[Description=The cluster volume-snapshot Restore] [PASSED]|[Connect]|[ComponentName=postgresql]|[Description=Connect to the cluster] [PASSED]|[Delete Restore Cluster]|[BackupMethod=volume-snapshot]|[Description=Delete the volume-snapshot restore cluster] [PASSED]|[Expose]|[Disable=true;TYPE=intranet;ComponentName=postgresql]|[Description=Expose Disable the intranet service with postgresql component] [PASSED]|[Delete]|[-]|[Description=Delete the cluster] [END]
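As a final spot-check after the suite, a minimal sketch using the resource groups seen in this run: `kubectl get clusters.apps.kubeblocks.io,opsrequests.operations.kubeblocks.io,backups.dataprotection.kubeblocks.io,pvc --namespace ns-pkuvx` should report that no resources are found, matching the `check cluster resource non-exist OK: pvc` and `No resources found in ns-pkuvx namespace.` lines above.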