https://github.com/apecloud/apecloud-cd/actions/runs/21930235150
previous_version: kubeblocks_version:1.0.2
bash test/kbcli/test_kbcli_1.0.sh --type 6 --version 1.0.2 --service-version 8 --generate-output true --aws-access-key-id *** --aws-secret-access-key *** --jihulab-token *** --random-namespace true --region eastus --cloud-provider aks
CURRENT_TEST_DIR:test/kbcli
source commons files
source engines files
source kubeblocks files
source kubedb files
CLUSTER_NAME:
`kubectl get namespace | grep ns-gtubu`
`kubectl create namespace ns-gtubu`
namespace/ns-gtubu created
create namespace ns-gtubu done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.2`
Your system is linux_amd64
Installing kbcli ...
Downloading ...
kbcli installed successfully.
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Make sure your docker service is running and begin your journey with kbcli:
  kbcli playground init
For more information on how to get started, please visit:
  https://kubeblocks.io
download kbcli v1.0.2 done
Kubernetes: v1.32.10
KubeBlocks: 1.0.2
kbcli: 1.0.2
Kubernetes Env: v1.32.10
check snapshot controller
check snapshot controller done
POD_RESOURCES:
aks kb-default-sc found
aks default-vsc found
found default storage class: default
KubeBlocks version is:1.0.2 skip upgrade KubeBlocks
current KubeBlocks version: 1.0.2
check component definition
set component name:mongodb
set component version
set component version:mongodb
set service versions:8.0.17,7.0.28,6.0.27,5.0.29,4.4.29
set service versions sorted:4.4.29,5.0.29,6.0.27,7.0.28,8.0.17
set mongodb component definition
set mongodb component definition mongodb-1.0.2
REPORT_COUNT 0:0
set replicas first:3,4.4.29|3,5.0.29|3,6.0.27|3,7.0.28|3,8.0.17
set replicas second max again:3,8.0.17
REPORT_COUNT 2:1
CLUSTER_TOPOLOGY:replicaset
cluster definition topology: replicaset
sharding topology replicaset found in cluster definition mongodb
set mongodb component definition
set mongodb component definition mongo-shard-1.0.2
LIMIT_CPU:0.1
LIMIT_MEMORY:0.5
storage size: 3
CLUSTER_NAME:mongodb-trwkwn
pod_info:
termination_policy:Delete
create 3 replica Delete mongodb cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: mongodb-1.0.2 by component version:mongodb
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mongodb-trwkwn
  namespace: ns-gtubu
spec:
  clusterDef: mongodb
  topology: replicaset
  terminationPolicy: Delete
  componentSpecs:
    - name: mongodb
      serviceVersion: 8.0.17
      replicas: 3
      resources:
        limits:
          cpu: 100m
          memory: 0.5Gi
        requests:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 3Gi
`kubectl apply -f test_create_mongodb-trwkwn.yaml`
cluster.apps.kubeblocks.io/mongodb-trwkwn created
apply test_create_mongodb-trwkwn.yaml Success
`rm -rf test_create_mongodb-trwkwn.yaml`
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Creating  Feb 12,2026 15:18 UTC+0800  clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
cluster_status:Creating
check cluster status done
cluster_status:Running
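Note: the Creating-to-Running poll above can also be expressed declaratively against the Cluster resource itself; a minimal sketch, assuming the Cluster CR exposes the same phase value that kbcli prints in the STATUS column:

  # block until the cluster reports Running (or give up after 10 minutes)
  kubectl wait clusters.apps.kubeblocks.io/mongodb-trwkwn -n ns-gtubu \
    --for=jsonpath='{.status.phase}'=Running --timeout=10m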
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-0;secondary: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check pod mongodb-trwkwn-mongodb-0 container_name mongodb exist password g300cV7275bHJW7t
Container mongodb logs contain secret password:2026-02-12T07:20:12Z INFO MongoDB Create user: root, passwd: g300cV7275bHJW7t, roles: map[db:admin role:root]
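The DB_USERNAME/DB_PASSWORD/DB_PORT values above come out of the account Secret, where they are stored base64-encoded; a minimal sketch for decoding them by hand (same Secret name and namespace as above):

  NS=ns-gtubu
  SECRET=mongodb-trwkwn-mongodb-account-root
  DB_USERNAME=$(kubectl -n "$NS" get secret "$SECRET" -o jsonpath='{.data.username}' | base64 -d)
  DB_PASSWORD=$(kubectl -n "$NS" get secret "$SECRET" -o jsonpath='{.data.password}' | base64 -d)
  DB_PORT=$(kubectl -n "$NS" get secret "$SECRET" -o jsonpath='{.data.port}' | base64 -d)
  echo "DB_USERNAME:$DB_USERNAME;DB_PORT:$DB_PORT"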
describe cluster
`kbcli cluster describe mongodb-trwkwn --namespace ns-gtubu`
Name: mongodb-trwkwn	Created Time: Feb 12,2026 15:18 UTC+0800
NAMESPACE  CLUSTER-DEFINITION  TOPOLOGY  STATUS  TERMINATION-POLICY
ns-gtubu  mongodb  replicaset  Running  Delete

Endpoints:
COMPONENT  INTERNAL  EXTERNAL
mongodb  mongodb-trwkwn-mongodb.ns-gtubu.svc.cluster.local:27017  mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017  mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017

Topology:
COMPONENT  SERVICE-VERSION  INSTANCE  ROLE  STATUS  AZ  NODE  CREATED-TIME
mongodb  8.0.17  mongodb-trwkwn-mongodb-0  primary  Running  0  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb  8.0.17  mongodb-trwkwn-mongodb-1  secondary  Running  0  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb  8.0.17  mongodb-trwkwn-mongodb-2  secondary  Running  0  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800

Resources Allocation:
COMPONENT  INSTANCE-TEMPLATE  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE-SIZE  STORAGE-CLASS
mongodb  100m / 100m  512Mi / 512Mi  data:3Gi  default

Images:
COMPONENT  COMPONENT-DEFINITION  IMAGE
mongodb  mongodb-1.0.2  docker.io/apecloud/percona-server-mongodb:8.0.17 docker.io/apecloud/percona-backup-mongodb:2.12.0 docker.io/apecloud/mongodb_exporter:0.44.0

Data Protection:
BACKUP-REPO  AUTO-BACKUP  BACKUP-SCHEDULE  BACKUP-METHOD  BACKUP-RETENTION  RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-gtubu mongodb-trwkwn
`kbcli cluster label mongodb-trwkwn app.kubernetes.io/instance- --namespace ns-gtubu`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label mongodb-trwkwn app.kubernetes.io/instance=mongodb-trwkwn --namespace ns-gtubu`
`kbcli cluster label mongodb-trwkwn --list --namespace ns-gtubu`
NAME  NAMESPACE  LABELS
mongodb-trwkwn  ns-gtubu  app.kubernetes.io/instance=mongodb-trwkwn clusterdefinition.kubeblocks.io/name=mongodb
label cluster app.kubernetes.io/instance=mongodb-trwkwn Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=mongodb-trwkwn --namespace ns-gtubu`
`kbcli cluster label mongodb-trwkwn --list --namespace ns-gtubu`
NAME  NAMESPACE  LABELS
mongodb-trwkwn  ns-gtubu  app.kubernetes.io/instance=mongodb-trwkwn case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=mongodb
label cluster case.name=kbcli.test1 Success
`kbcli cluster label mongodb-trwkwn case.name=kbcli.test2 --overwrite --namespace ns-gtubu`
`kbcli cluster label mongodb-trwkwn --list --namespace ns-gtubu`
NAME  NAMESPACE  LABELS
mongodb-trwkwn  ns-gtubu  app.kubernetes.io/instance=mongodb-trwkwn case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=mongodb
label cluster case.name=kbcli.test2 Success
`kbcli cluster label mongodb-trwkwn case.name- --namespace ns-gtubu`
`kbcli cluster label mongodb-trwkwn --list --namespace ns-gtubu`
NAME  NAMESPACE  LABELS
mongodb-trwkwn  ns-gtubu  app.kubernetes.io/instance=mongodb-trwkwn clusterdefinition.kubeblocks.io/name=mongodb
delete cluster label case.name Success
list-accounts on characterType mongodb is not supported yet
cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo " echo \"rs.status()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
Current Mongosh Log ID: 698d7ff920daa495e98b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:19:58.935+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:20:05.348+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:20:05.348+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:20:05.348+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:20:05.348+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> {
  set: 'mongodb-trwkwn-mongodb',
  date: ISODate('2026-02-12T07:24:06.231Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2026-02-12T07:24:04.832Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2026-02-12T07:24:04.832Z'),
    lastDurableWallTime: ISODate('2026-02-12T07:24:04.832Z'),
    lastWrittenWallTime: ISODate('2026-02-12T07:24:04.832Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1770880987, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2026-02-12T07:20:07.133Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1770880806, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1770880806, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1770880806, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 2,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2026-02-12T07:20:07.448Z'),
    wMajorityWriteAvailabilityDate: ISODate('2026-02-12T07:20:07.642Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongodb-trwkwn-mongodb-0.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017',
      health: 1, state: 1, stateStr: 'PRIMARY', uptime: 248,
      optime: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
      optimeDate: ISODate('2026-02-12T07:24:04.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1770881044, i: 3 }), t: Long('1') },
      optimeWrittenDate: ISODate('2026-02-12T07:24:04.000Z'),
      lastAppliedWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastDurableWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastWrittenWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      syncSourceHost: '', syncSourceId: -1, infoMessage: '',
      electionTime: Timestamp({ t: 1770880807, i: 1 }),
      electionDate: ISODate('2026-02-12T07:20:07.000Z'),
      configVersion: 5, configTerm: 1, self: true, lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongodb-trwkwn-mongodb-1.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017',
      health: 1, state: 2, stateStr: 'SECONDARY', uptime: 171,
      optime: { ts: Timestamp({ t: 1770881044, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1770881044, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1770881044, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2026-02-12T07:24:04.000Z'),
      optimeDurableDate: ISODate('2026-02-12T07:24:04.000Z'),
      optimeWrittenDate: ISODate('2026-02-12T07:24:04.000Z'),
      lastAppliedWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastDurableWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastWrittenWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastHeartbeat: ISODate('2026-02-12T07:24:04.531Z'),
      lastHeartbeatRecv: ISODate('2026-02-12T07:24:04.630Z'),
      pingMs: Long('8'), lastHeartbeatMessage: '',
      syncSourceHost: 'mongodb-trwkwn-mongodb-0.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017',
      syncSourceId: 0, infoMessage: '', configVersion: 5, configTerm: 1
    },
    {
      _id: 2,
      name: 'mongodb-trwkwn-mongodb-2.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017',
      health: 1, state: 2, stateStr: 'SECONDARY', uptime: 131,
      optime: { ts: Timestamp({ t: 1770881044, i: 2 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1770881044, i: 2 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1770881044, i: 2 }), t: Long('1') },
      optimeDate: ISODate('2026-02-12T07:24:04.000Z'),
      optimeDurableDate: ISODate('2026-02-12T07:24:04.000Z'),
      optimeWrittenDate: ISODate('2026-02-12T07:24:04.000Z'),
      lastAppliedWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastDurableWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastWrittenWallTime: ISODate('2026-02-12T07:24:04.832Z'),
      lastHeartbeat: ISODate('2026-02-12T07:24:04.834Z'),
      lastHeartbeatRecv: ISODate('2026-02-12T07:24:06.035Z'),
      pingMs: Long('7'), lastHeartbeatMessage: '',
      syncSourceHost: 'mongodb-trwkwn-mongodb-1.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017',
      syncSourceId: 1, infoMessage: '', configVersion: 5, configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1770881044, i: 3 }),
    signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') }
  },
  operationTime: Timestamp({ t: 1770881044, i: 3 })
}
mongodb-trwkwn-mongodb [direct: primary] admin>
connect cluster Success
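The connect checks above pipe a shell snippet into `kubectl exec -it ... -- bash`; a minimal non-interactive sketch of the same rs.status() check using mongosh --eval (container name and service endpoint as above, DB_PASSWORD as decoded in the earlier sketch):

  kubectl exec -n ns-gtubu mongodb-trwkwn-mongodb-0 -c mongodb -- \
    mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 \
      -u root -p "$DB_PASSWORD" --authenticationDatabase admin --quiet \
      --eval 'rs.status().members.map(m => m.name + " " + m.stateStr)'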
insert batch data by db client
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-mongodb-trwkwn --namespace ns-gtubu`
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-mongodb-trwkwn
  namespace: ns-gtubu
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "mongodb-trwkwn-mongodb.ns-gtubu.svc.cluster.local"
        - "--user"
        - "root"
        - "--password"
        - "g300cV7275bHJW7t"
        - "--port"
        - "27017"
        - "--dbtype"
        - "mongodb"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never
`kubectl apply -f test-db-client-executionloop-mongodb-trwkwn.yaml`
pod/test-db-client-executionloop-mongodb-trwkwn created
apply test-db-client-executionloop-mongodb-trwkwn.yaml Success
`rm -rf test-db-client-executionloop-mongodb-trwkwn.yaml`
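Instead of the pod_status polling loop that follows, the script could block on the client pod reaching its terminal phase; a sketch, assuming the pod name from the manifest above (the Completed status shown below corresponds to pod phase Succeeded for a restartPolicy: Never pod):

  kubectl wait pod/test-db-client-executionloop-mongodb-trwkwn -n ns-gtubu \
    --for=jsonpath='{.status.phase}'=Succeeded --timeout=180s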
check pod status
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 5s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 10s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 15s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 20s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 25s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 30s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 35s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 41s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 46s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 51s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 56s
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 1/1 Running 0 61s
check pod test-db-client-executionloop-mongodb-trwkwn status done
pod_status:NAME READY STATUS RESTARTS AGE test-db-client-executionloop-mongodb-trwkwn 0/1 Completed 0 66s
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Running  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-0;secondary: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cb8}
[ 46s ] executions total: 373 successful: 371 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cb9}
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cba}
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cbb}
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cbc}
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cbd}
Inserted document: BsonObjectId{value=698d804f66ee6b716b545cbe}
Inserted document: BsonObjectId{value=698d805066ee6b716b545cbf}
Inserted document: BsonObjectId{value=698d805066ee6b716b545cc0}
Inserted document: BsonObjectId{value=698d805066ee6b716b545cc1}
[ 47s ] executions total: 382 successful: 380 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805066ee6b716b545cc2}
Inserted document: BsonObjectId{value=698d805066ee6b716b545cc3}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc4}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc5}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc6}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc7}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc8}
Inserted document: BsonObjectId{value=698d805166ee6b716b545cc9}
[ 48s ] executions total: 390 successful: 388 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805166ee6b716b545cca}
Inserted document: BsonObjectId{value=698d805166ee6b716b545ccb}
Inserted document: BsonObjectId{value=698d805266ee6b716b545ccc}
[ 49s ] executions total: 393 successful: 391 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805266ee6b716b545ccd}
Inserted document: BsonObjectId{value=698d805366ee6b716b545cce}
Inserted document: BsonObjectId{value=698d805366ee6b716b545ccf}
Inserted document: BsonObjectId{value=698d805366ee6b716b545cd0}
Inserted document: BsonObjectId{value=698d805366ee6b716b545cd1}
[ 50s ] executions total: 398 successful: 396 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805366ee6b716b545cd2}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd3}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd4}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd5}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd6}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd7}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd8}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cd9}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cda}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cdb}
Inserted document: BsonObjectId{value=698d805466ee6b716b545cdc}
[ 51s ] executions total: 409 successful: 407 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805566ee6b716b545cdd}
Inserted document: BsonObjectId{value=698d805566ee6b716b545cde}
Inserted document: BsonObjectId{value=698d805566ee6b716b545cdf}
Inserted document: BsonObjectId{value=698d805566ee6b716b545ce0}
Inserted document: BsonObjectId{value=698d805566ee6b716b545ce1}
[ 52s ] executions total: 414 successful: 412 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805666ee6b716b545ce2}
Inserted document: BsonObjectId{value=698d805666ee6b716b545ce3}
Inserted document: BsonObjectId{value=698d805666ee6b716b545ce4}
Inserted document: BsonObjectId{value=698d805666ee6b716b545ce5}
Inserted document: BsonObjectId{value=698d805766ee6b716b545ce6}
Inserted document: BsonObjectId{value=698d805766ee6b716b545ce7}
Inserted document: BsonObjectId{value=698d805766ee6b716b545ce8}
Inserted document: BsonObjectId{value=698d805766ee6b716b545ce9}
Inserted document: BsonObjectId{value=698d805766ee6b716b545cea}
[ 53s ] executions total: 423 successful: 421 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805766ee6b716b545ceb}
Inserted document: BsonObjectId{value=698d805766ee6b716b545cec}
Inserted document: BsonObjectId{value=698d805766ee6b716b545ced}
Inserted document: BsonObjectId{value=698d805766ee6b716b545cee}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cef}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf0}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf1}
[ 54s ] executions total: 430 successful: 428 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf2}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf3}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf4}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf5}
Inserted document: BsonObjectId{value=698d805866ee6b716b545cf6}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cf7}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cf8}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cf9}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cfa}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cfb}
[ 55s ] executions total: 440 successful: 438 failed: 2 disconnect: 1
Inserted document: BsonObjectId{value=698d805966ee6b716b545cfc}
Inserted document: BsonObjectId{value=698d805966ee6b716b545cfd}
[ 60s ] executions total: 442 successful: 440 failed: 2 disconnect: 1
Test Result:
Total Executions: 442
Successful Executions: 440
Failed Executions: 2
Disconnection Counts: 1
Connection Information:
Database Type: mongodb
Host: mongodb-trwkwn-mongodb.ns-gtubu.svc.cluster.local
Port: 27017
Database:
Table:
User: root
Org:
Access Mode: mysql
Test Type: executionloop
Query:
Duration: 60 seconds
Interval: 1 seconds
DB_CLIENT_BATCH_DATA_COUNT: 440
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-mongodb-trwkwn --namespace ns-gtubu`
pod/test-db-client-executionloop-mongodb-trwkwn patched (no change)
pod "test-db-client-executionloop-mongodb-trwkwn" force deleted
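The Test Result block above is taken from the client pod's stdout; a sketch for pulling just the totals back out of the pod log (run before the force delete above), assuming the same pod name:

  kubectl logs test-db-client-executionloop-mongodb-trwkwn -n ns-gtubu \
    | grep -E 'Total Executions|Successful Executions|Failed Executions|Disconnection Counts'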
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.remove({}) ; db.col.insertOne({a:'jyjav'})\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
Current Mongosh Log ID: 698d809c22dc64734d8b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:19:58.935+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:20:05.348+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:20:05.348+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:20:05.348+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:20:05.348+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> DeprecationWarning: Collection.remove() is deprecated. Use deleteOne, deleteMany, findOneAndDelete, or bulkWrite.
{ acknowledged: true, insertedId: ObjectId('698d80b922dc64734d8b79a2') }
mongodb-trwkwn-mongodb [direct: primary] admin>
add consistent data jyjav Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.remove({}) ; db.col.insertOne({a:'jyjav'})\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d80d2dd5167029a8b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2026-02-12T07:21:46.103+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:21:52.849+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:21:52.849+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:21:52.849+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:21:52.849+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: secondary] admin> DeprecationWarning: Collection.remove() is deprecated. Use deleteOne, deleteMany, findOneAndDelete, or bulkWrite.
Uncaught MongoServerError[NotWritablePrimary]: not primary
mongodb-trwkwn-mongodb [direct: secondary] admin>
check add consistent data readonly Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`mongosh mongodb://root:g300cV7275bHJW7t@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local`
exec return msg:Current Mongosh Log ID: 698d8133f3da7a290c8b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local/?directConnection=true&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:19:58.935+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:20:05.348+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:20:05.348+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:20:05.348+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:20:05.348+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] test>
connect headlessEndpoints Success
cluster does not need to check monitor currently
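The write attempted through the -ro service above is rejected with NotWritablePrimary because that endpoint resolves to a secondary; reads only succeed once a secondary read preference is set (the later checks use readPref('secondary')). A sketch of the same read-only access expressed as standard connection-string options instead (endpoint as above, DB_PASSWORD as decoded in the earlier sketch):

  mongosh "mongodb://root:${DB_PASSWORD}@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?authSource=admin&readPreference=secondaryPreferred" \
    --quiet --eval 'db.col.find()'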
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Running  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-0;secondary: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
cluster mongodb scale-out
cluster mongodb scale-out replicas: 5
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-out mongodb-trwkwn --auto-approve --force=true --components mongodb --replicas 2 --namespace ns-gtubu`
OpsRequest mongodb-trwkwn-horizontalscaling-pj2zg created successfully, you can view the progress:
	kbcli cluster describe-ops mongodb-trwkwn-horizontalscaling-pj2zg -n ns-gtubu
check ops status
`kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
mongodb-trwkwn-horizontalscaling-pj2zg  ns-gtubu  HorizontalScaling  mongodb-trwkwn  mongodb  Running  -/-  Feb 12,2026 15:29 UTC+0800
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Updating  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
mongodb-trwkwn-mongodb-3  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:29 UTC+0800
mongodb-trwkwn-mongodb-4  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:30 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-0;secondary: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2 mongodb-trwkwn-mongodb-3 mongodb-trwkwn-mongodb-4
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
mongodb-trwkwn-horizontalscaling-pj2zg  ns-gtubu  HorizontalScaling  mongodb-trwkwn  mongodb  Succeed  2/2  Feb 12,2026 15:29 UTC+0800
check ops status done
ops_status:mongodb-trwkwn-horizontalscaling-pj2zg ns-gtubu HorizontalScaling mongodb-trwkwn mongodb Succeed 2/2 Feb 12,2026 15:29 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-horizontalscaling-pj2zg --namespace ns-gtubu`
opsrequest.operations.kubeblocks.io/mongodb-trwkwn-horizontalscaling-pj2zg patched
`kbcli cluster delete-ops --name mongodb-trwkwn-horizontalscaling-pj2zg --force --auto-approve --namespace ns-gtubu`
OpsRequest mongodb-trwkwn-horizontalscaling-pj2zg deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d821a0c3b4381378b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:19:58.935+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:20:05.348+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:20:05.348+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:20:05.348+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:20:05.348+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`
check readonly data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d8242b3610411828b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:21:46.103+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:21:52.849+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:21:52.849+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:21:52.849+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:21:52.849+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check db_client batch [440] equal [440] data Success
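The [440] vs [440] comparison above can also be scripted by capturing the bare count; a minimal sketch (same endpoint and collection as the check above, DB_PASSWORD as decoded in the earlier sketch; --quiet drops the mongosh banner):

  COUNT=$(kubectl exec -n ns-gtubu mongodb-trwkwn-mongodb-0 -c mongodb -- \
    mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 \
      -u root -p "$DB_PASSWORD" --authenticationDatabase admin --quiet \
      --eval 'db.getSiblingDB("admin").executions_loop_table.estimatedDocumentCount()')
  [ "$COUNT" -eq 440 ] && echo "check db_client batch [$COUNT] equal [440] data Success"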
cluster mongodb scale-in
cluster mongodb scale-in replicas: 3
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster scale-in mongodb-trwkwn --auto-approve --force=true --components mongodb --replicas 2 --namespace ns-gtubu`
OpsRequest mongodb-trwkwn-horizontalscaling-nc9bf created successfully, you can view the progress:
	kbcli cluster describe-ops mongodb-trwkwn-horizontalscaling-nc9bf -n ns-gtubu
check ops status
`kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
mongodb-trwkwn-horizontalscaling-nc9bf  ns-gtubu  HorizontalScaling  mongodb-trwkwn  mongodb  Running  0/2  Feb 12,2026 15:34 UTC+0800
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Running  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:18 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-0;secondary: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
mongodb-trwkwn-horizontalscaling-nc9bf  ns-gtubu  HorizontalScaling  mongodb-trwkwn  mongodb  Succeed  2/2  Feb 12,2026 15:34 UTC+0800
check ops status done
ops_status:mongodb-trwkwn-horizontalscaling-nc9bf ns-gtubu HorizontalScaling mongodb-trwkwn mongodb Succeed 2/2 Feb 12,2026 15:34 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-horizontalscaling-nc9bf --namespace ns-gtubu`
opsrequest.operations.kubeblocks.io/mongodb-trwkwn-horizontalscaling-nc9bf patched
`kbcli cluster delete-ops --name mongodb-trwkwn-horizontalscaling-nc9bf --force --auto-approve --namespace ns-gtubu`
OpsRequest mongodb-trwkwn-horizontalscaling-nc9bf deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d82ceaa5e992e668b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:19:58.935+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:20:05.348+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:20:05.348+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:20:05.348+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:20:05.348+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`
check readonly data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d82f756fe4df2008b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:21:46.103+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:21:52.849+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:21:52.849+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:21:52.849+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:21:52.849+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check db_client batch [440] equal [440] data Success
test failover
check cluster status before cluster-failover-
check cluster status done
cluster_status:Running
delete pod:mongodb-trwkwn-mongodb-0
`kubectl delete pod mongodb-trwkwn-mongodb-0 --force --namespace ns-gtubu`
pod "mongodb-trwkwn-mongodb-0" force deleted
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Running  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:37 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi  data:3Gi  aks-cicdamdpool-14916756-vmss000002/10.224.0.8  Feb 12,2026 15:21 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-mongodb-2;secondary: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`
check cluster connect done
check failover pod name
failover pod name:mongodb-trwkwn-mongodb-2
failover Success
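The failover check above re-derives the new primary from kbcli list-instances; a sketch for reading the role straight from the pod labels instead (this assumes the instance pods carry the same app.kubernetes.io/instance label used for the Secret lookups above, and that KubeBlocks publishes the replica role as a kubeblocks.io/role pod label — both are assumptions here):

  kubectl get pods -n ns-gtubu -l app.kubernetes.io/instance=mongodb-trwkwn -L kubeblocks.io/role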
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`
check data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d83965e417635fc8b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T07:21:46.103+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:21:52.849+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:21:52.849+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:21:52.849+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:21:52.849+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`
check readonly data:
Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 698d83bce3b57434568b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2026-02-12T07:37:47.432+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T07:37:58.934+00:00: You are running this process as the root user, which is not recommended
2026-02-12T07:37:58.935+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T07:37:58.935+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T07:37:58.935+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ]
mongodb-trwkwn-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`
check db_client batch [440] equal [440] data Success
skip cluster Upgrade
`kubectl get pvc -l app.kubernetes.io/instance=mongodb-trwkwn,apps.kubeblocks.io/component-name=mongodb,apps.kubeblocks.io/vct-name=data --namespace ns-gtubu`
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster volume-expand mongodb-trwkwn --auto-approve --force=true --components mongodb --volume-claim-templates data --storage 7Gi --namespace ns-gtubu`
OpsRequest mongodb-trwkwn-volumeexpansion-mg6sz created successfully, you can view the progress:
	kbcli cluster describe-ops mongodb-trwkwn-volumeexpansion-mg6sz -n ns-gtubu
check ops status
`kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu`
NAME  NAMESPACE  TYPE  CLUSTER  COMPONENT  STATUS  PROGRESS  CREATED-TIME
mongodb-trwkwn-volumeexpansion-mg6sz  ns-gtubu  VolumeExpansion  mongodb-trwkwn  mongodb  Running  0/3  Feb 12,2026 15:40 UTC+0800
check cluster status
`kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER-DEFINITION  TERMINATION-POLICY  STATUS  CREATED-TIME  LABELS
mongodb-trwkwn  ns-gtubu  mongodb  Delete  Updating  Feb 12,2026 15:18 UTC+0800  app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu`
NAME  NAMESPACE  CLUSTER  COMPONENT  STATUS  ROLE  ACCESSMODE  AZ  CPU(REQUEST/LIMIT)  MEMORY(REQUEST/LIMIT)  STORAGE  NODE  CREATED-TIME
mongodb-trwkwn-mongodb-0  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:7Gi  aks-cicdamdpool-14916756-vmss000000/10.224.0.9  Feb 12,2026 15:37 UTC+0800
mongodb-trwkwn-mongodb-1  ns-gtubu  mongodb-trwkwn  mongodb  Running  secondary  0  100m / 100m  512Mi / 512Mi  data:7Gi  aks-cicdamdpool-14916756-vmss000005/10.224.0.10  Feb 12,2026 15:19 UTC+0800
mongodb-trwkwn-mongodb-2  ns-gtubu  mongodb-trwkwn  mongodb  Running  primary  0  100m / 100m  512Mi / 512Mi
data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 15:21 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-2;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-volumeexpansion-mg6sz ns-gtubu VolumeExpansion mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:40 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-volumeexpansion-mg6sz ns-gtubu VolumeExpansion mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:40 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-volumeexpansion-mg6sz --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-volumeexpansion-mg6sz patched  `kbcli cluster delete-ops --name mongodb-trwkwn-volumeexpansion-mg6sz --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-volumeexpansion-mg6sz deleted  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d85f974d36d8b398b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T07:21:46.103+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:21:52.849+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:21:52.849+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:21:52.849+00:00: We suggest setting the contents of sysfsFile to 0. 2026-02-12T07:21:52.849+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8621cd916bdee78b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T07:21:05.492+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:21:12.456+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:21:12.456+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:21:12.456+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T07:21:12.456+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster vscale mongodb-trwkwn --auto-approve --force=true --components mongodb --cpu 200m --memory 0.6Gi --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-verticalscaling-82kcx created successfully, you can view the progress: kbcli cluster describe-ops mongodb-trwkwn-verticalscaling-82kcx -n ns-gtubu check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-verticalscaling-82kcx ns-gtubu VerticalScaling mongodb-trwkwn mongodb Running 0/3 Feb 12,2026 15:50 UTC+0800 check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Updating Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 15:52 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu 
mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 15:51 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 15:53 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-1;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-2  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-verticalscaling-82kcx ns-gtubu VerticalScaling mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:50 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-verticalscaling-82kcx ns-gtubu VerticalScaling mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:50 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-verticalscaling-82kcx --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-verticalscaling-82kcx patched  `kbcli cluster delete-ops --name mongodb-trwkwn-verticalscaling-82kcx --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-verticalscaling-82kcx deleted  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d873d51985a18b98b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 
2026-02-12T07:51:48.197+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:51:54.094+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:51:54.094+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:51:54.094+00:00: We suggest setting the contents of sysfsFile to 0. 2026-02-12T07:51:54.094+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d874bc5869fd0528b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2026-02-12T07:54:04.527+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:54:09.735+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:54:09.736+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:54:09.736+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T07:54:09.736+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B cluster stop check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster stop mongodb-trwkwn --auto-approve --force=true --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-stop-5qs5x created successfully, you can view the progress: kbcli cluster describe-ops mongodb-trwkwn-stop-5qs5x -n ns-gtubu check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-stop-5qs5x ns-gtubu Stop mongodb-trwkwn Pending -/- Feb 12,2026 15:55 UTC+0800 check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Stopping Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B cluster_status:Stopping(B check cluster status done(B cluster_status:Stopped(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME check pod status done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-stop-5qs5x ns-gtubu Stop mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:55 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-stop-5qs5x ns-gtubu Stop mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:55 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-stop-5qs5x --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-stop-5qs5x patched  `kbcli cluster delete-ops --name mongodb-trwkwn-stop-5qs5x --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-stop-5qs5x deleted cluster start check cluster status before ops check cluster status done(B cluster_status:Stopped(B  `kbcli cluster start mongodb-trwkwn --force=true --namespace ns-gtubu `(B  OpsRequest 
mongodb-trwkwn-start-kvmjs created successfully, you can view the progress: kbcli cluster describe-ops mongodb-trwkwn-start-kvmjs -n ns-gtubu check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-start-kvmjs ns-gtubu Start mongodb-trwkwn mongodb Running 0/3 Feb 12,2026 15:56 UTC+0800 check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Updating Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 15:56 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 15:57 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 15:58 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-1;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-2  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-start-kvmjs ns-gtubu Start mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:56 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-start-kvmjs ns-gtubu Start mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 15:56 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-start-kvmjs --namespace ns-gtubu `(B  
opsrequest.operations.kubeblocks.io/mongodb-trwkwn-start-kvmjs patched  `kbcli cluster delete-ops --name mongodb-trwkwn-start-kvmjs --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-start-kvmjs deleted  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d896d7c5a9f2db78b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T07:58:09.794+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:58:14.726+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:58:14.726+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:58:14.726+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T07:58:14.727+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d89793874b818868b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2026-02-12T08:03:12.857+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:03:18.841+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:03:18.842+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:03:18.842+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:03:18.842+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B test switchover(B cluster promote check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster promote mongodb-trwkwn --auto-approve --force=true --instance mongodb-trwkwn-mongodb-1 --candidate mongodb-trwkwn-mongodb-0 --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-switchover-bh6bl created successfully, you can view the progress: kbcli cluster describe-ops mongodb-trwkwn-switchover-bh6bl -n ns-gtubu check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-switchover-bh6bl ns-gtubu Switchover mongodb-trwkwn mongodb-trwkwn-mongodb Running 0/1 Feb 12,2026 16:04 UTC+0800 check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Running Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 15:56 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 15:57 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 15:58 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-1;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-2  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-switchover-bh6bl ns-gtubu Switchover mongodb-trwkwn mongodb-trwkwn-mongodb Succeed 1/1 Feb 12,2026 16:04 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-switchover-bh6bl ns-gtubu Switchover mongodb-trwkwn mongodb-trwkwn-mongodb Succeed 1/1 Feb 12,2026 16:04 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-switchover-bh6bl --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-switchover-bh6bl patched  `kbcli cluster delete-ops --name mongodb-trwkwn-switchover-bh6bl --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-switchover-bh6bl deleted  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d89aa3655e07f648b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T07:57:31.919+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:57:37.616+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:57:37.616+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:57:37.616+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T07:57:37.616+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d89b54857bd6ec48b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:03:12.857+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:03:18.841+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:03:18.842+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:03:18.842+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:03:18.842+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B switchover pod:mongodb-trwkwn-mongodb-0 switchover success(B test failover kill1(B check cluster status before cluster-failover-kill1 check cluster status done(B cluster_status:Running(B  `kill 1`(B  exec return message: check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Running Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 15:56 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 15:57 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 15:58 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-0;secondary(B: mongodb-trwkwn-mongodb-1 mongodb-trwkwn-mongodb-2  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash`(B  connect checking...(B connect checking...(B check cluster connect done(B check failover pod name failover pod name:mongodb-trwkwn-mongodb-1 failover kill1 Success(B  `kubectl 
get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8a22c6988be1cf8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T07:58:09.794+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T07:58:14.726+00:00: You are running this process as the root user, which is not recommended 2026-02-12T07:58:14.726+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T07:58:14.726+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T07:58:14.727+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8a2dac9ce743108b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:06:31.503+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:06:39.815+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:06:39.815+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:06:39.815+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:06:39.815+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-1 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B cluster restart check cluster status before ops check cluster status done(B cluster_status:Running(B  `kbcli cluster restart mongodb-trwkwn --auto-approve --force=true --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-restart-5n6cd created successfully, you can view the progress: kbcli cluster describe-ops mongodb-trwkwn-restart-5n6cd -n ns-gtubu check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-restart-5n6cd ns-gtubu Restart mongodb-trwkwn mongodb Running -/- Feb 12,2026 16:07 UTC+0800 check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Updating Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B cluster_status:Updating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:08 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:09 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m 
/ 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:07 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-2;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`(B  check cluster connect done(B check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-restart-5n6cd ns-gtubu Restart mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 16:07 UTC+0800 check ops status done(B ops_status:mongodb-trwkwn-restart-5n6cd ns-gtubu Restart mongodb-trwkwn mongodb Succeed 3/3 Feb 12,2026 16:07 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-restart-5n6cd --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-restart-5n6cd patched  `kbcli cluster delete-ops --name mongodb-trwkwn-restart-5n6cd --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-restart-5n6cd deleted  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8af607630dc2ea8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:08:11.837+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:08:18.230+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:08:18.231+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:08:18.231+00:00: We suggest setting the contents of sysfsFile to 0. 2026-02-12T08:08:18.231+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8b02fa90887f1e8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2026-02-12T08:09:58.193+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:10:03.006+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:10:03.006+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:10:03.006+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:10:03.006+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B cluster rebuild instances apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-trwkwn-rebuildinstance- namespace: ns-gtubu spec: type: RebuildInstance clusterName: mongodb-trwkwn force: true rebuildFrom: - componentName: mongodb instances: - name: mongodb-trwkwn-mongodb-0 inPlace: true check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_mongodb-trwkwn.yaml`(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-rebuildinstance-2cspn created create test_ops_cluster_mongodb-trwkwn.yaml Success(B  `rm -rf test_ops_cluster_mongodb-trwkwn.yaml`(B  check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn Feb 12,2026 16:11 UTC+0800 ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn Running -/- Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B 
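The script is now polling the RebuildInstance OpsRequest while the target instance is recreated. A minimal sketch of an equivalent wait loop, assuming the OpsRequest exposes its phase at .status.phase and using an arbitrary 10-second interval (both are assumptions, not taken from the test script):

ops=mongodb-trwkwn-rebuildinstance-2cspn
ns=ns-gtubu
# Poll the OpsRequest and the instance being rebuilt until the ops phase reports Succeed.
until [ "$(kubectl get opsrequests.operations "$ops" -n "$ns" -o jsonpath='{.status.phase}')" = "Succeed" ]; do
  kbcli cluster list-ops mongodb-trwkwn --status all --namespace "$ns"
  kubectl get pod mongodb-trwkwn-mongodb-0 -n "$ns" -o wide   # cycles through Init:0/5 ... Running during the rebuild
  sleep 10
done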
ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (this status line repeated, unchanged, while the rebuild was in progress)
rebuild pod: mongodb-trwkwn-mongodb-0 status Init:0/5 (repeated while the rebuilt pod's init containers were starting)
rebuild pod:
mongodb-trwkwn-mongodb-0 status Init:1/5 ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B rebuild pod: mongodb-trwkwn-mongodb-0 status Init:2/5 rebuild pod: mongodb-trwkwn-mongodb-0 status Init:4/5 rebuild pod: mongodb-trwkwn-mongodb-0 status PodInitializing rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:11 UTC+0800 (B rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running rebuild pod: mongodb-trwkwn-mongodb-0 status Running check ops status done(B ops_status:mongodb-trwkwn-rebuildinstance-2cspn ns-gtubu RebuildInstance mongodb-trwkwn mongodb Succeed 1/1 Feb 12,2026 16:11 UTC+0800 (B  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-rebuildinstance-2cspn --namespace ns-gtubu `(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-rebuildinstance-2cspn patched  `kbcli cluster delete-ops --name mongodb-trwkwn-rebuildinstance-2cspn --force --auto-approve --namespace ns-gtubu `(B  OpsRequest mongodb-trwkwn-rebuildinstance-2cspn deleted check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb Delete Running Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:11 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:09 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:07 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-2;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`(B  check cluster connect done(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8c5efe7c1fafa28b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:08:11.837+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:08:18.230+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:08:18.231+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:08:18.231+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:08:18.231+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8c69d9adccec6b8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2026-02-12T08:15:45.505+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:15:48.561+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:15:48.561+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:15:48.561+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:15:48.561+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check db_client batch [698] equal [440] data Failure(B check db_client batch [698] equal [440] data retry times: 1(B check db_client batch [440] equal [440] data Success(B cluster update terminationPolicy WipeOut  `kbcli cluster update mongodb-trwkwn --termination-policy=WipeOut --namespace ns-gtubu `(B  cluster.apps.kubeblocks.io/mongodb-trwkwn updated check cluster status  `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn ns-gtubu mongodb WipeOut Running Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:11 UTC+0800 mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:09 UTC+0800 mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:07 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-2;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`(B  check cluster connect done(B cluster datafile backup  `kubectl get backuprepo 
backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`(B   `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`(B   `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.accessKeyId}"`(B   `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.secretAccessKey}"`(B  KUBEBLOCKS NAMESPACE:kb-wrwyg get kubeblocks namespace done(B  `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-user}"`(B   `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-password}"`(B  minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000 list minio bucket kbcli-test  `echo 'mc alias set minioserver http://kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-546f6447c7-cvf8k --namespace kb-wrwyg -- bash`(B  list minio bucket done(B default backuprepo:backuprepo-kbcli-test exists(B  `kbcli cluster backup mongodb-trwkwn --method datafile --namespace ns-gtubu `(B  Backup backup-ns-gtubu-mongodb-trwkwn-20260212161731 created successfully, you can view the progress: kbcli cluster list-backups --names=backup-ns-gtubu-mongodb-trwkwn-20260212161731 -n ns-gtubu check backup status  `kbcli cluster list-backups mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-gtubu-mongodb-trwkwn-20260212161731 ns-gtubu mongodb-trwkwn datafile Running Delete Feb 12,2026 16:17 UTC+0800 backup_status:mongodb-trwkwn-datafile-Running(B backup_status:mongodb-trwkwn-datafile-Running(B check backup status done(B backup_status:backup-ns-gtubu-mongodb-trwkwn-20260212161731 ns-gtubu mongodb-trwkwn datafile Completed 389232 10s Delete Feb 12,2026 16:17 UTC+0800 Feb 12,2026 16:17 UTC+0800 (B cluster restore backup  `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212161731 --namespace ns-gtubu `(B  Name: backup-ns-gtubu-mongodb-trwkwn-20260212161731 Cluster: mongodb-trwkwn Namespace: ns-gtubu Spec: Method: datafile Policy Name: mongodb-trwkwn-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212161731-0ee9d TargetPodName: mongodb-trwkwn-mongodb-0 Phase: Completed Start Time: Feb 12,2026 16:17 UTC+0800 Completion Time: Feb 12,2026 16:17 UTC+0800 Status: Phase: Completed Total Size: 389232 ActionSet Name: mongodb-physical-br Repository: backuprepo-kbcli-test Duration: 10s Start Time: Feb 12,2026 16:17 UTC+0800 Completion Time: Feb 12,2026 16:17 UTC+0800 Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212161731 Time Range Start: Feb 12,2026 16:17 UTC+0800 Time Range End: Feb 12,2026 16:17 UTC+0800 Warning Events:  `kbcli cluster restore mongodb-trwkwn-backup --backup backup-ns-gtubu-mongodb-trwkwn-20260212161731 --namespace ns-gtubu `(B  Cluster mongodb-trwkwn-backup created check cluster status  `kbcli cluster list mongodb-trwkwn-backup --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn-backup ns-gtubu mongodb WipeOut Creating Feb 12,2026 16:17 UTC+0800 clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Creating(B cluster_status:Creating(B 
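The cluster_status polling here waits for the restored cluster mongodb-trwkwn-backup to leave Creating. A rough equivalent, assuming the Cluster resource reports its phase at .status.phase (an assumption; the script itself appears to parse `kbcli cluster list` output):

ns=ns-gtubu
cluster=mongodb-trwkwn-backup
# Wait for the restored cluster to reach Running, printing the phase each round.
while true; do
  phase=$(kubectl get clusters.apps.kubeblocks.io "$cluster" -n "$ns" -o jsonpath='{.status.phase}')
  echo "cluster_status:${phase}"
  [ "$phase" = "Running" ] && break
  sleep 10
done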
cluster_status:Creating (repeated while the restored cluster was provisioning)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances mongodb-trwkwn-backup --namespace ns-gtubu `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-trwkwn-backup-mongodb-0 ns-gtubu mongodb-trwkwn-backup mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:18 UTC+0800
mongodb-trwkwn-backup-mongodb-1 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:18 UTC+0800
mongodb-trwkwn-backup-mongodb-2 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:20 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-backup-mongodb-0;secondary: mongodb-trwkwn-backup-mongodb-1 mongodb-trwkwn-backup-mongodb-2
 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
 `echo " echo \"\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
 `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212161731 --namespace ns-gtubu `
Name: backup-ns-gtubu-mongodb-trwkwn-20260212161731 Cluster: mongodb-trwkwn Namespace: ns-gtubu Spec: Method: datafile Policy Name:
mongodb-trwkwn-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212161731-0ee9d TargetPodName: mongodb-trwkwn-mongodb-0 Phase: Completed Start Time: Feb 12,2026 16:17 UTC+0800 Completion Time: Feb 12,2026 16:17 UTC+0800 Status: Phase: Completed Total Size: 389232 ActionSet Name: mongodb-physical-br Repository: backuprepo-kbcli-test Duration: 10s Start Time: Feb 12,2026 16:17 UTC+0800 Completion Time: Feb 12,2026 16:17 UTC+0800 Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212161731 Time Range Start: Feb 12,2026 16:17 UTC+0800 Time Range End: Feb 12,2026 16:17 UTC+0800 Warning Events: cluster connect  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo " echo \"rs.status()\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash `(B  Current Mongosh Log ID: 698d8dddd4711f66858b79a1 Connecting to: mongodb://@mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:19:23.347+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:19:29.270+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:19:29.270+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:19:29.270+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:19:29.270+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-backup-mongodb [direct: primary] admin> { set: 'mongodb-trwkwn-backup-mongodb', date: ISODate('2026-02-12T08:23:00.740Z'), myState: 1, term: Long('11'), syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'), majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3, optimes: { lastCommittedOpTime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, lastCommittedWallTime: ISODate('2026-02-12T08:23:00.433Z'), readConcernMajorityOpTime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, appliedOpTime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, durableOpTime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, writtenOpTime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, lastAppliedWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastDurableWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastWrittenWallTime: ISODate('2026-02-12T08:23:00.433Z') }, lastStableRecoveryTimestamp: Timestamp({ t: 1770884576, i: 4 }), electionCandidateMetrics: { lastElectionReason: 'priorityTakeover', lastElectionDate: ISODate('2026-02-12T08:22:27.167Z'), electionTerm: Long('11'), lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1770884545, i: 1 }), t: Long('10') }, lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1770884545, i: 1 }), t: Long('10') }, lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1770884545, i: 1 }), t: Long('10') }, numVotesNeeded: 2, priorityAtElection: 2, electionTimeoutMillis: Long('10000'), priorPrimaryMemberId: 1, numCatchUpOps: Long('0'), newTermStartDate: ISODate('2026-02-12T08:22:27.186Z'), wMajorityWriteAvailabilityDate: ISODate('2026-02-12T08:22:27.201Z') }, electionParticipantMetrics: { votedForCandidate: true, electionTerm: Long('10'), lastVoteDate: ISODate('2026-02-12T08:22:15.793Z'), electionCandidateMemberId: 1, voteReason: '', lastWrittenOpTimeAtElection: { ts: Timestamp({ t: 1770884518, i: 1 }), t: Long('-1') }, maxWrittenOpTimeInSet: { ts: Timestamp({ t: 1770884518, i: 1 }), t: Long('-1') }, lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1770884518, i: 1 }), t: Long('-1') }, maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1770884518, i: 1 }), t: Long('-1') }, priorityAtElection: 2 }, members: [ { _id: 0, name: 'mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 217, optime: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, optimeDate: ISODate('2026-02-12T08:23:00.000Z'), optimeWritten: { ts: Timestamp({ t: 1770884580, i: 2 }), t: Long('11') }, optimeWrittenDate: ISODate('2026-02-12T08:23:00.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastDurableWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastWrittenWallTime: ISODate('2026-02-12T08:23:00.433Z'), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp({ t: 1770884547, i: 1 }), electionDate: ISODate('2026-02-12T08:22:27.000Z'), configVersion: 1, configTerm: 11, self: true, lastHeartbeatMessage: '' }, { _id: 1, name: 'mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 54, optime: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeDurable: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeWritten: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeDate: 
ISODate('2026-02-12T08:22:56.000Z'), optimeDurableDate: ISODate('2026-02-12T08:22:56.000Z'), optimeWrittenDate: ISODate('2026-02-12T08:22:56.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastDurableWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastWrittenWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastHeartbeat: ISODate('2026-02-12T08:22:59.532Z'), lastHeartbeatRecv: ISODate('2026-02-12T08:22:59.732Z'), pingMs: Long('8'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', syncSourceId: 0, infoMessage: '', configVersion: 1, configTerm: 11 }, { _id: 2, name: 'mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 55, optime: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeDurable: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeWritten: { ts: Timestamp({ t: 1770884576, i: 4 }), t: Long('11') }, optimeDate: ISODate('2026-02-12T08:22:56.000Z'), optimeDurableDate: ISODate('2026-02-12T08:22:56.000Z'), optimeWrittenDate: ISODate('2026-02-12T08:22:56.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastDurableWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastWrittenWallTime: ISODate('2026-02-12T08:23:00.433Z'), lastHeartbeat: ISODate('2026-02-12T08:22:59.532Z'), lastHeartbeatRecv: ISODate('2026-02-12T08:22:59.331Z'), pingMs: Long('8'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', syncSourceId: 1, infoMessage: '', configVersion: 1, configTerm: 11 } ], ok: 1, '$clusterTime': { clusterTime: Timestamp({ t: 1770884580, i: 2 }), signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') } }, operationTime: Timestamp({ t: 1770884580, i: 2 }) } mongodb-trwkwn-backup-mongodb [direct: primary] admin> connect cluster Success(B delete cluster mongodb-trwkwn-backup  `kbcli cluster delete mongodb-trwkwn-backup --auto-approve --namespace ns-gtubu `(B  pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Running 0 4m25s mongodb-trwkwn-backup-mongodb-1 4/4 Running 0 4m5s mongodb-trwkwn-backup-mongodb-2 4/4 Running 0 2m22s Cluster mongodb-trwkwn-backup deleted pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Terminating 0 4m45s delete cluster pod done(B check cluster resource non-exist OK: pvc(B delete cluster done(B check resource cm non exists check resource cm non exists(B cluster rebuild instances apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-trwkwn-rebuildinstance- namespace: ns-gtubu spec: type: RebuildInstance clusterName: mongodb-trwkwn force: true rebuildFrom: - componentName: mongodb instances: - name: mongodb-trwkwn-mongodb-0 inPlace: true check cluster status before ops check cluster status done(B cluster_status:Running(B  `kubectl create -f test_ops_cluster_mongodb-trwkwn.yaml`(B  opsrequest.operations.kubeblocks.io/mongodb-trwkwn-rebuildinstance-bz7d2 created create test_ops_cluster_mongodb-trwkwn.yaml Success(B  `rm -rf test_ops_cluster_mongodb-trwkwn.yaml`(B  check ops status  `kbcli cluster list-ops mongodb-trwkwn --status all --namespace ns-gtubu `(B  NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-trwkwn-rebuildinstance-bz7d2 ns-gtubu RebuildInstance mongodb-trwkwn Running -/- Feb 12,2026 16:23 UTC+0800 ops_status:mongodb-trwkwn-rebuildinstance-bz7d2 ns-gtubu 
RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:23 UTC+0800
ops_status:mongodb-trwkwn-rebuildinstance-bz7d2 ns-gtubu RebuildInstance mongodb-trwkwn mongodb Running 0/1 Feb 12,2026 16:23 UTC+0800 (repeated, unchanged, while the rebuild was in progress)
check ops status done
ops_status:mongodb-trwkwn-rebuildinstance-bz7d2 ns-gtubu RebuildInstance mongodb-trwkwn mongodb Succeed 1/1 Feb 12,2026 16:23 UTC+0800
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-trwkwn-rebuildinstance-bz7d2 --namespace ns-gtubu `
opsrequest.operations.kubeblocks.io/mongodb-trwkwn-rebuildinstance-bz7d2 patched
 `kbcli cluster delete-ops --name mongodb-trwkwn-rebuildinstance-bz7d2 --force --auto-approve --namespace ns-gtubu `
OpsRequest mongodb-trwkwn-rebuildinstance-bz7d2 deleted
check cluster status
 `kbcli cluster list mongodb-trwkwn --show-labels --namespace ns-gtubu `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-trwkwn ns-gtubu mongodb WipeOut Running Feb 12,2026 15:18 UTC+0800 app.kubernetes.io/instance=mongodb-trwkwn,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances mongodb-trwkwn --namespace ns-gtubu `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-trwkwn-mongodb-0 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:24 UTC+0800
mongodb-trwkwn-mongodb-1 ns-gtubu mongodb-trwkwn mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:09 UTC+0800
mongodb-trwkwn-mongodb-2 ns-gtubu mongodb-trwkwn mongodb Running primary 0 200m / 200m
644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:07 UTC+0800 check pod status done(B check cluster role check cluster role done(B primary(B: mongodb-trwkwn-mongodb-2;secondary(B: mongodb-trwkwn-mongodb-0 mongodb-trwkwn-mongodb-1  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check cluster connect  `echo " echo \"\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash`(B  check cluster connect done(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find()\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8e7c4b30d8130a8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:08:11.837+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:08:18.230+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:08:18.231+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:08:18.231+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:08:18.231+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: primary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: primary] admin> check cluster data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-mongodb-0 --namespace ns-gtubu -- bash `(B  check readonly data: Defaulted container "mongodb" out of: mongodb, mongodb-backup-agent, exporter, kbagent, init-syncer (init), init-kubectl (init), init-pbm-agent (init), init-kbagent (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 698d8e8891fdddff6e8b79a1 Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb-ro.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2026-02-12T08:09:58.193+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:10:03.006+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:10:03.006+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:10:03.006+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:10:03.006+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-mongodb [direct: secondary] admin> [ { _id: ObjectId('698d80b922dc64734d8b79a2'), a: 'jyjav' } ] mongodb-trwkwn-mongodb [direct: secondary] admin> check cluster readonly data consistent Success(B  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B check db_client batch data count  `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin" | kubectl exec -it mongodb-trwkwn-mongodb-2 --namespace ns-gtubu -- bash `(B  check db_client batch [440] equal [440] data Success(B cluster delete backup  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-gtubu-mongodb-trwkwn-20260212161731 --namespace ns-gtubu `(B  backup.dataprotection.kubeblocks.io/backup-ns-gtubu-mongodb-trwkwn-20260212161731 patched  `kbcli cluster delete-backup mongodb-trwkwn --name backup-ns-gtubu-mongodb-trwkwn-20260212161731 --force --auto-approve --namespace ns-gtubu `(B  Backup backup-ns-gtubu-mongodb-trwkwn-20260212161731 deleted cluster pbm-physical backup  `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`(B   `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`(B   `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.accessKeyId}"`(B   `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.secretAccessKey}"`(B  KUBEBLOCKS NAMESPACE:kb-wrwyg get kubeblocks namespace done(B  `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-user}"`(B   `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-password}"`(B  minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000 list minio bucket kbcli-test  `echo 'mc alias set minioserver http://kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-546f6447c7-cvf8k --namespace kb-wrwyg -- bash`(B  list minio bucket done(B default backuprepo:backuprepo-kbcli-test exists(B  `kbcli cluster backup mongodb-trwkwn --method pbm-physical --namespace ns-gtubu `(B  Backup backup-ns-gtubu-mongodb-trwkwn-20260212162606 created successfully, you can view the progress: kbcli cluster list-backups --names=backup-ns-gtubu-mongodb-trwkwn-20260212162606 -n ns-gtubu check backup status  `kbcli cluster list-backups mongodb-trwkwn --namespace ns-gtubu `(B  NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-gtubu-mongodb-trwkwn-20260212162606 ns-gtubu mongodb-trwkwn pbm-physical Running Delete Feb 12,2026 16:26 UTC+0800 backup_status:mongodb-trwkwn-pbm-physical-Running(B backup_status:mongodb-trwkwn-pbm-physical-Running(B backup_status:mongodb-trwkwn-pbm-physical-Running(B backup_status:mongodb-trwkwn-pbm-physical-Running(B check backup 
status done(B backup_status:backup-ns-gtubu-mongodb-trwkwn-20260212162606 ns-gtubu mongodb-trwkwn pbm-physical Completed 588450 21s Delete Feb 12,2026 16:26 UTC+0800 Feb 12,2026 16:26 UTC+0800 (B cluster restore backup  `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212162606 --namespace ns-gtubu `(B  Name: backup-ns-gtubu-mongodb-trwkwn-20260212162606 Cluster: mongodb-trwkwn Namespace: ns-gtubu Spec: Method: pbm-physical Policy Name: mongodb-trwkwn-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212162606-8e203 TargetPodName: mongodb-trwkwn-mongodb-0 Phase: Completed Start Time: Feb 12,2026 16:26 UTC+0800 Completion Time: Feb 12,2026 16:26 UTC+0800 Extras: =================== 1 =================== backupName: 2026-02-12T08:26:14Z backupType: physical lastWriteTime: 2026-02-12T08:26:16Z Status: Phase: Completed Total Size: 588450 ActionSet Name: mongodb-rs-pbm-physical Repository: backuprepo-kbcli-test Duration: 21s Start Time: Feb 12,2026 16:26 UTC+0800 Completion Time: Feb 12,2026 16:26 UTC+0800 Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 Time Range Start: Feb 12,2026 16:26 UTC+0800 Time Range End: Feb 12,2026 16:26 UTC+0800 Warning Events:  `kbcli cluster restore mongodb-trwkwn-backup --backup backup-ns-gtubu-mongodb-trwkwn-20260212162606 --namespace ns-gtubu `(B  Cluster mongodb-trwkwn-backup created check cluster status  `kbcli cluster list mongodb-trwkwn-backup --show-labels --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-trwkwn-backup ns-gtubu mongodb WipeOut Creating Feb 12,2026 16:26 UTC+0800 clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B cluster_status:Creating(B check cluster status done(B cluster_status:Running(B check pod status  `kbcli cluster list-instances mongodb-trwkwn-backup --namespace ns-gtubu `(B  NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-trwkwn-backup-mongodb-0 ns-gtubu mongodb-trwkwn-backup mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:27 UTC+0800 mongodb-trwkwn-backup-mongodb-1 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:27 UTC+0800 mongodb-trwkwn-backup-mongodb-2 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-backup-mongodb-0; secondary: mongodb-trwkwn-backup-mongodb-1 mongodb-trwkwn-backup-mongodb-2
 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
 `echo " echo \"\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
check backup restore post ready
check backup restore post ready exists
post_ready_pod_status:restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc 2/2 Running 0 23s
  ... (polled roughly every 10s; the -jtvlc pod stayed 2/2 Running until it was about 5m7s old)
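(The post_ready_pod_status lines above and below come from a polling loop over the postReady restore pods. A minimal sketch of such a loop is shown here; the `dataprotection.kubeblocks.io/restore` label value is taken from the pod YAML dumped further down, while the retry budget and sleep interval are illustrative, not the test's real values.)

  # Poll the postReady restore pods until they succeed or the retry budget runs out
  selector='dataprotection.kubeblocks.io/restore=mongodb-trwkwn-backup-mongodb-9da370b0-postready'
  for attempt in $(seq 1 60); do
    kubectl get pod -n ns-gtubu -l "$selector" --no-headers
    phases=$(kubectl get pod -n ns-gtubu -l "$selector" -o jsonpath='{.items[*].status.phase}')
    if [[ "$phases" == *Succeeded* ]]; then
      echo "check backup restore post ready done"
      break
    fi
    sleep 10
  done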
post_ready_pod_status:restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc 0/2 Error 0 5m17s
post_ready_pod_status:restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 2/2 Running 0 4s   restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc 0/2 Error 0 5m28s
  ... (the replacement pod -flgw4 stayed 2/2 Running for just under two minutes while -jtvlc remained 0/2 Error)
post_ready_pod_status:restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 0/2 Error 0 2m6s   restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc 0/2 Error 0 7m30s
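(When the first postReady attempt fails like this, the owning Job spins up a replacement pod. To see why the `restore` container exited non-zero, the usual checks are along these lines; the pod, container, and Job names are taken from the pod YAML dumped below.)

  # Logs of the failed restore container (runs the PBM restore script)
  kubectl logs -n ns-gtubu restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc -c restore

  # Events and container termination state for the same pod
  kubectl describe pod -n ns-gtubu restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc

  # The owning Job records how many attempts have been made and failed
  kubectl get job -n ns-gtubu restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 -o wide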
  ... (both pods then stayed 0/2 Error on every poll, until -flgw4 was about 5m10s old and -jtvlc about 10m old)
[Error] check backup restore post ready timeout
--------------------------------------get pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc yaml--------------------------------------
 `kubectl get pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 -o yaml --namespace ns-gtubu `
apiVersion: v1 kind: Pod metadata: annotations: dataprotection.kubeblocks.io/backup-extras:
'[{"backup_name":"2026-02-12T08:26:14Z","backup_type":"physical","last_write_time":"2026-02-12T08:26:16Z"}]' dataprotection.kubeblocks.io/stop-restore-manager: "true" creationTimestamp: "2026-02-12T08:34:39Z" generateName: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0- labels: app.kubernetes.io/managed-by: kubeblocks-dataprotection batch.kubernetes.io/controller-uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 batch.kubernetes.io/job-name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 controller-uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 dataprotection.kubeblocks.io/restore: mongodb-trwkwn-backup-mongodb-9da370b0-postready job-name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 namespace: ns-gtubu ownerReferences: - apiVersion: batch/v1 blockOwnerDeletion: true controller: true kind: Job name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 resourceVersion: "504686" uid: ac0b78b6-7899-4f47-96ce-4b610ffb7d57 spec: containers: - command: - bash - -c - "#!/bin/bash\n# shellcheck disable=SC2086\n\nfunction handle_exit() {\n exit_code=$?\n \ if [ $exit_code -ne 0 ]; then\n echo \"failed with exit code $exit_code\"\n \ touch \"${DP_BACKUP_INFO_FILE}.exit\"\n exit 1\n fi\n}\n# log info file\nfunction DP_log() {\n msg=$1\n local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n \ echo \"${curr_date} INFO: $msg\"\n}\n\n# log error info\nfunction DP_error_log() {\n msg=$1\n local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n echo \"${curr_date} ERROR: $msg\"\n}\n\nfunction buildJsonString() {\n local jsonString=${1}\n \ local key=${2}\n local value=${3}\n if [ ! -z \"$jsonString\" ];then\n \ jsonString=\"${jsonString},\"\n fi\n echo \"${jsonString}\\\"${key}\\\":\\\"${value}\\\"\"\n}\n\n# Save backup status info file for syncing progress.\n# timeFormat: %Y-%m-%dT%H:%M:%SZ\nfunction DP_save_backup_status_info() {\n export PATH=\"$PATH:$DP_DATASAFED_BIN_PATH\"\n \ export DATASAFED_BACKEND_BASE_PATH=\"$DP_BACKUP_BASE_PATH\"\n \n local totalSize=$1\n local startTime=$2\n local stopTime=$3\n local timeZone=$4\n \ local extras=$5\n local timeZoneStr=\"\"\n if [ ! -z ${timeZone} ]; then\n timeZoneStr=\",\\\"timeZone\\\":\\\"${timeZone}\\\"\"\n fi\n \ if [ -z \"${stopTime}\" ];then\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\"}\" > ${DP_BACKUP_INFO_FILE}\n elif [ -z \"${startTime}\" ];then\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" > ${DP_BACKUP_INFO_FILE}\n else\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"start\\\":\\\"${startTime}\\\",\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" > ${DP_BACKUP_INFO_FILE}\n fi\n}\n\nfunction getToolConfigValue() {\n local var=$1\n cat \"$toolConfig\" | grep \"$var\" | awk '{print $NF}'\n}\n\nfunction set_backup_config_env() {\n toolConfig=/etc/datasafed/datasafed.conf\n if [ ! 
-f ${toolConfig} ]; then\n DP_error_log \"Config file not found: ${toolConfig}\"\n \ exit 1\n fi\n\n local provider=\"\"\n local access_key_id=\"\"\n local secret_access_key=\"\"\n local region=\"\"\n local endpoint=\"\"\n local bucket=\"\"\n\n IFS=$'\\n'\n for line in $(cat ${toolConfig}); do\n line=$(eval echo $line)\n if [[ $line == \"access_key_id\"* ]]; then\n access_key_id=$(getToolConfigValue \"$line\")\n elif [[ $line == \"secret_access_key\"* ]]; then\n secret_access_key=$(getToolConfigValue \"$line\")\n elif [[ $line == \"region\"* ]]; then\n region=$(getToolConfigValue \"$line\")\n elif [[ $line == \"endpoint\"* ]]; then\n endpoint=$(getToolConfigValue \"$line\")\n elif [[ $line == \"root\"* ]]; then\n bucket=$(getToolConfigValue \"$line\")\n elif [[ $line == \"provider\"* ]]; then\n provider=$(getToolConfigValue \"$line\")\n fi\n done\n\n if [[ ! $endpoint =~ ^https?:// ]]; then\n endpoint=\"https://${endpoint}\"\n \ fi\n\n if [[ \"$provider\" == \"Alibaba\" ]]; then\n regex='https?:\\/\\/oss-(.*?)\\.aliyuncs\\.com'\n \ if [[ \"$endpoint\" =~ $regex ]]; then\n region=\"${BASH_REMATCH[1]}\"\n \ DP_log \"Extract region from $endpoint-> $region\"\n else\n DP_log \"Failed to extract region from endpoint: $endpoint\"\n fi\n elif [[ \"$provider\" == \"TencentCOS\" ]]; then\n regex='https?:\\/\\/cos\\.(.*?)\\.myqcloud\\.com'\n \ if [[ \"$endpoint\" =~ $regex ]]; then\n region=\"${BASH_REMATCH[1]}\"\n \ DP_log \"Extract region from $endpoint-> $region\"\n else\n DP_log \"Failed to extract region from endpoint: $endpoint\"\n fi\n elif [[ \"$provider\" == \"Minio\" ]]; then\n export S3_FORCE_PATH_STYLE=\"true\"\n else\n echo \"Unsupported provider: $provider\"\n fi\n backup_path=$(dirname \"$DP_BACKUP_BASE_PATH\")\n\n \ export S3_ACCESS_KEY=\"${access_key_id}\"\n export S3_SECRET_KEY=\"${secret_access_key}\"\n \ export S3_REGION=\"${region}\"\n export S3_ENDPOINT=\"${endpoint}\"\n export S3_BUCKET=\"${bucket}\"\n export S3_PREFIX=\"${backup_path#/}/$PBM_BACKUP_DIR_NAME\"\n \ \n DP_log \"storage config have been extracted.\"\n}\n\n# config backup agent\ngenerate_endpoints() {\n local fqdns=$1\n local port=$2\n\n if [ -z \"$fqdns\" ]; then\n \ echo \"ERROR: No FQDNs provided for endpoints.\" >&2\n exit 1\n \ fi\n\n IFS=',' read -ra fqdn_array <<< \"$fqdns\"\n local endpoints=()\n\n \ for fqdn in \"${fqdn_array[@]}\"; do\n trimmed_fqdn=$(echo \"$fqdn\" | xargs)\n if [[ -n \"$trimmed_fqdn\" ]]; then\n endpoints+=(\"${trimmed_fqdn}:${port}\")\n \ fi\n done\n\n IFS=','; echo \"${endpoints[*]}\"\n}\n\nfunction export_pbm_env_vars() {\n export PBM_AGENT_MONGODB_USERNAME=\"$MONGODB_USER\"\n \ export PBM_AGENT_MONGODB_PASSWORD=\"$MONGODB_PASSWORD\"\n \n cfg_server_endpoints=\"$(generate_endpoints \"$CFG_SERVER_POD_FQDN_LIST\" \"$CFG_SERVER_INTERNAL_PORT\")\"\n export PBM_MONGODB_URI=\"mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$cfg_server_endpoints/?authSource=admin&replSetName=$CFG_SERVER_REPLICA_SET_NAME\"\n}\n\nfunction export_pbm_env_vars_for_rs() {\n export PBM_AGENT_MONGODB_USERNAME=\"$MONGODB_USER\"\n \ export PBM_AGENT_MONGODB_PASSWORD=\"$MONGODB_PASSWORD\"\n\n mongodb_endpoints=\"$(generate_endpoints \"$MONGODB_POD_FQDN_LIST\" \"$KB_SERVICE_PORT\")\"\n export PBM_MONGODB_URI=\"mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$mongodb_endpoints/?authSource=admin&replSetName=$MONGODB_REPLICA_SET_NAME\"\n}\n\nfunction sync_pbm_storage_config() {\n echo \"INFO: Checking if PBM storage config exists\"\n \ pbm_config_exists=true\n check_config=$(pbm 
config --mongodb-uri \"$PBM_MONGODB_URI\" -o json) || {\n pbm_config_exists=false\n echo \"INFO: PBM storage config does not exist.\"\n }\n if [ \"$pbm_config_exists\" = \"true\" ]; then\n # check_config=$(pbm config --mongodb-uri \"$PBM_MONGODB_URI\" -o json)\n current_endpoint=$(echo \"$check_config\" | jq -r '.storage.s3.endpointUrl')\n current_region=$(echo \"$check_config\" | jq -r '.storage.s3.region')\n current_bucket=$(echo \"$check_config\" | jq -r '.storage.s3.bucket')\n current_prefix=$(echo \"$check_config\" | jq -r '.storage.s3.prefix')\n echo \"INFO: Current PBM storage endpoint: $current_endpoint\"\n echo \"INFO: Current PBM storage region: $current_region\"\n \ echo \"INFO: Current PBM storage bucket: $current_bucket\"\n echo \"INFO: Current PBM storage prefix: $current_prefix\"\n if [ \"$current_prefix\" = \"$S3_PREFIX\" ] && [ \"$current_region\" = \"$S3_REGION\" ] && [ \"$current_bucket\" = \"$S3_BUCKET\" ] && [ \"$current_endpoint\" = \"$S3_ENDPOINT\" ]; then\n echo \"INFO: PBM storage config already exists.\"\n else\n pbm_config_exists=false\n \ fi\n fi\n if [ \"$pbm_config_exists\" = \"false\" ]; then\n cat < /dev/null\nstorage:\n \ type: s3\n s3:\n region: ${S3_REGION}\n bucket: ${S3_BUCKET}\n prefix: ${S3_PREFIX}\n endpointUrl: ${S3_ENDPOINT}\n forcePathStyle: ${S3_FORCE_PATH_STYLE:-false}\n \ credentials:\n access-key-id: ${S3_ACCESS_KEY}\n secret-access-key: ${S3_SECRET_KEY}\nrestore:\n numDownloadWorkers: ${PBM_RESTORE_DOWNLOAD_WORKERS:-4}\nbackup:\n \ timeouts:\n startingStatus: 60\nEOF\n sleep 5\n echo \"INFO: PBM storage configuration completed.\"\n fi\n}\n\nfunction print_pbm_logs_by_event() {\n local pbm_event=$1\n # echo \"INFO: Printing PBM logs by event: $pbm_event\"\n \ # shellcheck disable=SC2328\n local pbm_logs=$(pbm logs -e $pbm_event --tail 200 --mongodb-uri \"$PBM_MONGODB_URI\" > /dev/null)\n local purged_logs=$(echo \"$pbm_logs\" | awk -v start=\"$PBM_LOGS_START_TIME\" '$1 >= start')\n if [ -z \"$purged_logs\" ]; then\n return\n fi\n echo \"$purged_logs\"\n # echo \"INFO: PBM logs by event: $pbm_event printed.\"\n}\n\nfunction print_pbm_tail_logs() {\n echo \"INFO: Printing PBM tail logs\"\n pbm logs --tail 20 --mongodb-uri \"$PBM_MONGODB_URI\"\n}\n\nfunction handle_backup_exit() {\n exit_code=$?\n \ set +e\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n echo \"failed with exit code $exit_code\"\n touch \"${DP_BACKUP_INFO_FILE}.exit\"\n \ exit 1\n fi\n}\n\nfunction handle_restore_exit() {\n exit_code=$?\n set +e\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n echo \"failed with exit code $exit_code\"\n exit 1\n fi\n}\n\nfunction handle_pitr_exit() {\n exit_code=$?\n set +e\n if [[ \"$PBM_DISABLE_PITR_WHEN_EXIT\" == \"true\" ]]; then\n disable_pitr\n fi\n\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n \ echo \"failed with exit code $exit_code\"\n touch \"${DP_BACKUP_INFO_FILE}.exit\"\n \ exit 1\n fi\n}\n\nfunction wait_for_other_operations() {\n status_result=$(pbm status --mongodb-uri \"$PBM_MONGODB_URI\" -o json) || {\n echo \"INFO: PBM is not configured.\"\n return\n }\n local except_type=$1\n local running_status=$(echo \"$status_result\" | jq -r '.running')\n local retry_count=0\n local max_retries=60\n \ while [ -n \"$running_status\" ] && [ \"$running_status\" != \"{}\" ] && [ $retry_count -lt $max_retries ]; do\n retry_count=$((retry_count+1))\n local running_type=$(echo \"$running_status\" | jq -r '.type')\n if [ -n \"$running_type\" ] && [ \"$running_type\" = \"$except_type\" ]; then\n break\n fi\n echo 
\"INFO: Other operation $running_type is running, waiting... ($retry_count/$max_retries)\"\n \ sleep 5\n running_status=$(pbm status --mongodb-uri \"$PBM_MONGODB_URI\" -o json | jq -r '.running')\n done\n if [ $retry_count -ge $max_retries ]; then\n echo \"ERROR: Other operations are still running after $max_retries retries\"\n exit 1\n fi\n}\n\nfunction export_logs_start_time_env() {\n \ local logs_start_time=$(date +\"%Y-%m-%dT%H:%M:%SZ\")\n export PBM_LOGS_START_TIME=\"${logs_start_time}\"\n}\n\nfunction sync_pbm_config_from_storage() {\n echo \"INFO: Syncing PBM config from storage...\"\n\n \ wait_for_other_operations\n\n pbm config --force-resync --mongodb-uri \"$PBM_MONGODB_URI\"\n \ # print_pbm_logs_by_event \"resync\"\n \n # resync wait flag might don't work\n wait_for_other_operations\n\n echo \"INFO: PBM config synced from storage.\"\n}\n\nfunction wait_for_backup_completion() {\n describe_result=\"\"\n local retry_interval=5\n \ local attempt=1\n local max_retries=12\n set +e\n while true; do\n describe_result=$(pbm describe-backup --mongodb-uri \"$PBM_MONGODB_URI\" \"$backup_name\" -o json 2>&1)\n if [ $? -eq 0 ] && [ -n \"$describe_result\" ]; then\n backup_status=$(echo \"$describe_result\" | jq -r '.status')\n if [ \"$backup_status\" = \"starting\" ] || [ \"$backup_status\" = \"running\" ]; then\n echo \"INFO: Backup status is $backup_status, retrying in ${retry_interval}s...\"\n elif [ \"$backup_status\" = \"\" ]; then\n echo \"INFO: Backup status is $backup_status, retrying in ${retry_interval}s...\"\n attempt=$((attempt+1))\n elif [ \"$backup_status\" = \"done\" ]; then\n echo \"INFO: Backup status is done.\"\n break\n else\n echo \"ERROR: Backup failed with status: $backup_status\"\n exit 1\n fi\n elif echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"INFO: Backup metadata not found, retrying in ${retry_interval}s...\"\n attempt=$((attempt+1))\n else\n \ echo \"ERROR: Unexpected: $describe_result\"\n exit 1\n fi\n sleep $retry_interval\n if [ $attempt -gt $max_retries ]; then\n echo \"ERROR: Failed to get backup status after $max_retries attempts\"\n exit 1\n fi\n \ done\n set -e\n\n backup_status=$(echo \"$describe_result\" | jq -r '.status')\n \ if [ \"$backup_status\" != \"done\" ]; then\n echo \"ERROR: Backup did not complete successfully, final status: $backup_status\"\n exit 1\n fi\n}\n\nfunction create_restore_signal() {\n phase=$1\n kubectl apply -f - <&1)\n kubectl_get_exit_code=$?\n set -e\n # Wait for the restore signal ConfigMap to be created or updated\n if [[ \"$kubectl_get_exit_code\" -ne 0 ]]; then\n if [[ \"$kubectl_get_result\" == *\"not found\"* ]]; then\n create_restore_signal \"start\"\n fi\n \ else\n annotation_value=$(echo \"$kubectl_get_result\" | jq -r '.metadata.labels[\"apps.kubeblocks.io/restore-mongodb-shard\"] // empty')\n \ if [[ \"$annotation_value\" == \"start\" ]]; then\n break\n \ elif [[ \"$annotation_value\" == \"end\" ]]; then\n echo \"INFO: Restore completed, exiting.\"\n exit 0\n else\n \ echo \"INFO: Restore start signal is $annotation_value, updating...\"\n \ create_restore_signal \"start\"\n fi\n fi\n \ sleep 1\n done\n sleep 5\n echo \"INFO: Prepare restore start signal completed.\"\n}\n\nfunction process_restore_end_signal() {\n echo \"INFO: Waiting for prepare restore end signal...\"\n sleep 5\n dp_cm_name=\"$CLUSTER_NAME-restore-signal\"\n \ dp_cm_namespace=\"$CLUSTER_NAMESPACE\"\n while true; do\n set +e\n kubectl_get_result=$(kubectl get configmap $dp_cm_name -n $dp_cm_namespace -o json 2>&1)\n 
kubectl_get_exit_code=$?\n set -e\n # Wait for the restore signal ConfigMap to be created or updated\n if [[ \"$kubectl_get_exit_code\" -ne 0 ]]; then\n if [[ \"$kubectl_get_result\" == *\"not found\"* ]]; then\n create_restore_signal \"end\"\n fi\n else\n \ annotation_value=$(echo \"$kubectl_get_result\" | jq -r '.metadata.labels[\"apps.kubeblocks.io/restore-mongodb-shard\"] // empty')\n if [[ \"$annotation_value\" == \"end\" ]]; then\n break\n \ else\n echo \"INFO: Restore end signal is $annotation_value, updating...\"\n create_restore_signal \"end\"\n fi\n \ fi\n sleep 1\n done\n echo \"INFO: Prepare restore end signal completed.\"\n}\n\nfunction get_describe_backup_info() {\n describe_result=\"\"\n \ local max_retries=60\n local retry_interval=5\n local attempt=1\n set +e\n \ while [ $attempt -le $max_retries ]; do\n describe_result=$(pbm describe-backup --mongodb-uri \"$PBM_MONGODB_URI\" \"$backup_name\" -o json 2>&1)\n if [ $? -eq 0 ] && [ -n \"$describe_result\" ]; then\n break\n elif echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"INFO: Attempt $attempt: backup $backup_name not found, retrying in ${retry_interval}s...\"\n \ if [ $((attempt % 30)) -eq 29 ]; then\n echo \"INFO: Sync PBM config from storage again.\"\n sync_pbm_config_from_storage\n \ fi\n sleep $retry_interval\n ((attempt++))\n continue\n \ else\n echo \"ERROR: Failed to get backup metadata: $describe_result\"\n \ exit 1\n fi\n done\n set -e\n\n if [ -z \"$describe_result\" ] || echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"ERROR: Failed to get backup metadata after $max_retries attempts\"\n exit 1\n \ fi\n}\n\nfunction wait_for_restoring() {\n local cnf_file=\"${MOUNT_DIR}/tmp/pbm_restore.cnf\"\n \ cat < ${MOUNT_DIR}/tmp/pbm_restore.cnf\nstorage:\n type: s3\n s3:\n \ region: ${S3_REGION}\n bucket: ${S3_BUCKET}\n prefix: ${S3_PREFIX}\n \ endpointUrl: ${S3_ENDPOINT}\n forcePathStyle: ${S3_FORCE_PATH_STYLE:-false}\n \ credentials:\n access-key-id: ${S3_ACCESS_KEY}\n secret-access-key: ${S3_SECRET_KEY}\nEOF\n local attempt=0\n local max_retries=12\n local try_interval=5\n \ while true; do\n restore_status=$(pbm describe-restore \"$restore_name\" -c $cnf_file -o json | jq -r '.status') \n echo \"INFO: Restore $restore_name status: $restore_status, retrying in ${try_interval}s...\"\n if [ \"$restore_status\" = \"done\" ]; then\n rm $cnf_file\n break\n elif [ \"$restore_status\" = \"starting\" ] || [ \"$restore_status\" = \"running\" ]; then\n sleep $try_interval\n elif [ \"$restore_status\" = \"\" ]; then\n sleep $try_interval\n \ attempt=$((attempt+1))\n if [ $attempt -gt $max_retries ]; then\n \ echo \"ERROR: Restore $restore_name status is still empty after $max_retries retries\"\n rm $cnf_file\n exit 1\n fi\n else\n rm $cnf_file\n exit 1\n fi\n done\n}\n#!/bin/bash\nset -e\nset -o pipefail\nexport PATH=\"$PATH:$DP_DATASAFED_BIN_PATH:$MOUNT_DIR/tmp/bin\"\nexport DATASAFED_BACKEND_BASE_PATH=\"$DP_BACKUP_BASE_PATH\"\n\nexport_pbm_env_vars_for_rs\n\nset_backup_config_env\n\nexport_logs_start_time_env\n\ntrap handle_restore_exit EXIT\n\nwait_for_other_operations\n\nsync_pbm_storage_config\n\nsync_pbm_config_from_storage\n\nextras=$(cat /dp_downward/status_extras)\nbackup_name=$(echo \"$extras\" | jq -r '.[0].backup_name')\nbackup_type=$(echo \"$extras\" | jq -r '.[0].backup_type')\n\nif [ -z \"$backup_type\" ] || [ -z \"$backup_name\" ]; then\n echo \"ERROR: Backup type or backup name is empty, skip restore.\"\n exit 1\nfi\n\nget_describe_backup_info\n\nrs_name=$(echo \"$describe_result\" 
| jq -r '.replsets[0].name')\nmappings=\"$MONGODB_REPLICA_SET_NAME=$rs_name\"\necho \"INFO: Replica set mappings: $mappings\"\n\nprocess_restore_start_signal\n\nwait_for_other_operations\n\nrestore_name=$(pbm restore $backup_name --mongodb-uri \"$PBM_MONGODB_URI\" --replset-remapping \"$mappings\" -o json | jq -r '.name')\n\nwait_for_restoring\n\nprocess_restore_end_signal\n" env: - name: DP_BACKUP_NAME value: backup-ns-gtubu-mongodb-trwkwn-20260212162606 - name: DP_TARGET_RELATIVE_PATH - name: DP_BACKUP_ROOT_PATH value: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb - name: DP_BACKUP_BASE_PATH value: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 - name: DP_BACKUP_STOP_TIME value: "2026-02-12T08:26:18Z" - name: DATA_DIR value: /data/mongodb/db - name: MOUNT_DIR value: /data/mongodb - name: PBM_BACKUP_DIR_NAME value: pbm-backups - name: PBM_BACKUP_TYPE value: physical - name: PBM_COMPRESSION value: s2 - name: PBM_RESTORE_DOWNLOAD_WORKERS value: "4" - name: PBM_IMAGE_TAG value: 2.12.0 - name: PSM_IMAGE_TAG value: 8.0.17 - name: MONGODB_USER valueFrom: secretKeyRef: key: username name: mongodb-trwkwn-backup-mongodb-account-root - name: MONGODB_PASSWORD valueFrom: secretKeyRef: key: password name: mongodb-trwkwn-backup-mongodb-account-root - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: PATH value: /tools/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - name: KB_SERVICE_CHARACTER_TYPE value: mongodb - name: SERVICE_PORT value: $(KB_SERVICE_PORT) - name: MONGODB_ROOT_USER value: $(MONGODB_USER) - name: MONGODB_ROOT_PASSWORD value: $(MONGODB_PASSWORD) - name: MONGODB_GRANT_ANYACTION_PRIVILEGE value: "true" - name: PBM_AGENT_MONGODB_USERNAME value: $(MONGODB_USER) - name: PBM_AGENT_MONGODB_PASSWORD value: $(MONGODB_PASSWORD) - name: PBM_MONGODB_REPLICA_SET value: $(KB_CLUSTER_COMP_NAME) - name: PBM_AGENT_SIDECAR value: "true" - name: PBM_AGENT_SIDECAR_SLEEP value: "5" - name: PBM_MONGODB_URI value: mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@localhost:$(KB_SERVICE_PORT)/?authSource=admin - name: KB_POD_FQDN value: $(POD_NAME).$(CLUSTER_COMPONENT_NAME)-headless.$(CLUSTER_NAMESPACE).svc - name: DP_DB_USER valueFrom: secretKeyRef: key: username name: mongodb-trwkwn-backup-mongodb-account-root - name: DP_DB_PASSWORD valueFrom: secretKeyRef: key: password name: mongodb-trwkwn-backup-mongodb-account-root - name: DP_DB_PORT value: "27017" - name: DP_DB_HOST value: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed envFrom: - configMapRef: name: mongodb-trwkwn-backup-mongodb-env optional: false image: docker.io/apecloud/percona-backup-mongodb:2.12.0 imagePullPolicy: IfNotPresent name: restore resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data/mongodb name: data - mountPath: /dp_downward/ name: downward-volume - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: 
kube-api-access-9pxpz readOnly: true - args: - |2 set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" command: - sh - -c env: - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: restore-manager resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9pxpz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: kbcli-test-registry-key initContainers: - command: - /bin/sh - -c - /scripts/install-datasafed.sh /bin/datasafed image: docker.io/apecloud/datasafed:0.2.3 imagePullPolicy: IfNotPresent name: dp-copy-datasafed resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9pxpz readOnly: true nodeName: aks-cicdamdpool-14916756-vmss000002 nodeSelector: kubernetes.io/hostname: aks-cicdamdpool-14916756-vmss000002 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: runAsUser: 0 serviceAccount: kubeblocks-dataprotection-worker serviceAccountName: kubeblocks-dataprotection-worker terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: data persistentVolumeClaim: claimName: data-mongodb-trwkwn-backup-mongodb-0 - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/backup-extras'] path: status_extras name: downward-volume - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] path: stop_restore_manager name: downward-volume-sidecard - name: dp-datasafed-config secret: defaultMode: 420 secretName: tool-config-backuprepo-kbcli-test-4fs2t9 - emptyDir: {} name: dp-datasafed-bin - name: kube-api-access-9pxpz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-12T08:36:45Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-12T08:34:40Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 
"2026-02-12T08:36:41Z" reason: PodFailed status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-12T08:36:41Z" reason: PodFailed status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-12T08:34:39Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://3611fad73a5e87bd245d835c37578806188a21f9e700aa7f560f3b745b35ae86 image: docker.io/apecloud/percona-backup-mongodb:2.12.0 imageID: docker.io/apecloud/percona-backup-mongodb@sha256:16a65d6189650fa7c2bb8de02064fa94fed63c38665f67dff7d7355b66bd144d lastState: {} name: restore ready: false restartCount: 0 started: false state: terminated: containerID: containerd://3611fad73a5e87bd245d835c37578806188a21f9e700aa7f560f3b745b35ae86 exitCode: 1 finishedAt: "2026-02-12T08:36:41Z" reason: Error startedAt: "2026-02-12T08:34:41Z" volumeMounts: - mountPath: /data/mongodb name: data - mountPath: /dp_downward/ name: downward-volume - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9pxpz readOnly: true recursiveReadOnly: Disabled - containerID: containerd://fd2710fec2dfa00a17776ff042741fdf67567c7bf6c77637a423cbdf6a849d69 image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: restore-manager ready: false restartCount: 0 started: false state: terminated: containerID: containerd://fd2710fec2dfa00a17776ff042741fdf67567c7bf6c77637a423cbdf6a849d69 exitCode: 0 finishedAt: "2026-02-12T08:36:43Z" reason: Completed startedAt: "2026-02-12T08:34:41Z" volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9pxpz readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.8 hostIPs: - ip: 10.224.0.8 initContainerStatuses: - containerID: containerd://c81b1814709c95a2e04ec8cb284cc62c8dc6e392545e8bdfdb2d33a89fea37e0 image: docker.io/apecloud/datasafed:0.2.3 imageID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f lastState: {} name: dp-copy-datasafed ready: true restartCount: 0 started: false state: terminated: containerID: containerd://c81b1814709c95a2e04ec8cb284cc62c8dc6e392545e8bdfdb2d33a89fea37e0 exitCode: 0 finishedAt: "2026-02-12T08:34:39Z" reason: Completed startedAt: "2026-02-12T08:34:39Z" volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9pxpz readOnly: true recursiveReadOnly: Disabled phase: Failed podIP: 10.244.6.4 podIPs: - ip: 10.244.6.4 qosClass: BestEffort startTime: "2026-02-12T08:34:39Z" ------------------------------------------------------------------------------------------------------------------  `kubectl get pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc -o yaml --namespace ns-gtubu `(B  apiVersion: v1 kind: Pod metadata: annotations: dataprotection.kubeblocks.io/backup-extras: '[{"backup_name":"2026-02-12T08:26:14Z","backup_type":"physical","last_write_time":"2026-02-12T08:26:16Z"}]' dataprotection.kubeblocks.io/stop-restore-manager: "true" creationTimestamp: 
"2026-02-12T08:29:15Z" generateName: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0- labels: app.kubernetes.io/managed-by: kubeblocks-dataprotection batch.kubernetes.io/controller-uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 batch.kubernetes.io/job-name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 controller-uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 dataprotection.kubeblocks.io/restore: mongodb-trwkwn-backup-mongodb-9da370b0-postready job-name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc namespace: ns-gtubu ownerReferences: - apiVersion: batch/v1 blockOwnerDeletion: true controller: true kind: Job name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 uid: 7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 resourceVersion: "502012" uid: 5518b796-be0a-4432-8324-2c20e21e13f6 spec: containers: - command: - bash - -c - "#!/bin/bash\n# shellcheck disable=SC2086\n\nfunction handle_exit() {\n exit_code=$?\n \ if [ $exit_code -ne 0 ]; then\n echo \"failed with exit code $exit_code\"\n \ touch \"${DP_BACKUP_INFO_FILE}.exit\"\n exit 1\n fi\n}\n# log info file\nfunction DP_log() {\n msg=$1\n local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n \ echo \"${curr_date} INFO: $msg\"\n}\n\n# log error info\nfunction DP_error_log() {\n msg=$1\n local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S')\n echo \"${curr_date} ERROR: $msg\"\n}\n\nfunction buildJsonString() {\n local jsonString=${1}\n \ local key=${2}\n local value=${3}\n if [ ! -z \"$jsonString\" ];then\n \ jsonString=\"${jsonString},\"\n fi\n echo \"${jsonString}\\\"${key}\\\":\\\"${value}\\\"\"\n}\n\n# Save backup status info file for syncing progress.\n# timeFormat: %Y-%m-%dT%H:%M:%SZ\nfunction DP_save_backup_status_info() {\n export PATH=\"$PATH:$DP_DATASAFED_BIN_PATH\"\n \ export DATASAFED_BACKEND_BASE_PATH=\"$DP_BACKUP_BASE_PATH\"\n \n local totalSize=$1\n local startTime=$2\n local stopTime=$3\n local timeZone=$4\n \ local extras=$5\n local timeZoneStr=\"\"\n if [ ! -z ${timeZone} ]; then\n timeZoneStr=\",\\\"timeZone\\\":\\\"${timeZone}\\\"\"\n fi\n \ if [ -z \"${stopTime}\" ];then\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\"}\" > ${DP_BACKUP_INFO_FILE}\n elif [ -z \"${startTime}\" ];then\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" > ${DP_BACKUP_INFO_FILE}\n else\n echo \"{\\\"totalSize\\\":\\\"${totalSize}\\\",\\\"extras\\\":[${extras}],\\\"timeRange\\\":{\\\"start\\\":\\\"${startTime}\\\",\\\"end\\\":\\\"${stopTime}\\\"${timeZoneStr}}}\" > ${DP_BACKUP_INFO_FILE}\n fi\n}\n\nfunction getToolConfigValue() {\n local var=$1\n cat \"$toolConfig\" | grep \"$var\" | awk '{print $NF}'\n}\n\nfunction set_backup_config_env() {\n toolConfig=/etc/datasafed/datasafed.conf\n if [ ! 
-f ${toolConfig} ]; then\n DP_error_log \"Config file not found: ${toolConfig}\"\n \ exit 1\n fi\n\n local provider=\"\"\n local access_key_id=\"\"\n local secret_access_key=\"\"\n local region=\"\"\n local endpoint=\"\"\n local bucket=\"\"\n\n IFS=$'\\n'\n for line in $(cat ${toolConfig}); do\n line=$(eval echo $line)\n if [[ $line == \"access_key_id\"* ]]; then\n access_key_id=$(getToolConfigValue \"$line\")\n elif [[ $line == \"secret_access_key\"* ]]; then\n secret_access_key=$(getToolConfigValue \"$line\")\n elif [[ $line == \"region\"* ]]; then\n region=$(getToolConfigValue \"$line\")\n elif [[ $line == \"endpoint\"* ]]; then\n endpoint=$(getToolConfigValue \"$line\")\n elif [[ $line == \"root\"* ]]; then\n bucket=$(getToolConfigValue \"$line\")\n elif [[ $line == \"provider\"* ]]; then\n provider=$(getToolConfigValue \"$line\")\n fi\n done\n\n if [[ ! $endpoint =~ ^https?:// ]]; then\n endpoint=\"https://${endpoint}\"\n \ fi\n\n if [[ \"$provider\" == \"Alibaba\" ]]; then\n regex='https?:\\/\\/oss-(.*?)\\.aliyuncs\\.com'\n \ if [[ \"$endpoint\" =~ $regex ]]; then\n region=\"${BASH_REMATCH[1]}\"\n \ DP_log \"Extract region from $endpoint-> $region\"\n else\n DP_log \"Failed to extract region from endpoint: $endpoint\"\n fi\n elif [[ \"$provider\" == \"TencentCOS\" ]]; then\n regex='https?:\\/\\/cos\\.(.*?)\\.myqcloud\\.com'\n \ if [[ \"$endpoint\" =~ $regex ]]; then\n region=\"${BASH_REMATCH[1]}\"\n \ DP_log \"Extract region from $endpoint-> $region\"\n else\n DP_log \"Failed to extract region from endpoint: $endpoint\"\n fi\n elif [[ \"$provider\" == \"Minio\" ]]; then\n export S3_FORCE_PATH_STYLE=\"true\"\n else\n echo \"Unsupported provider: $provider\"\n fi\n backup_path=$(dirname \"$DP_BACKUP_BASE_PATH\")\n\n \ export S3_ACCESS_KEY=\"${access_key_id}\"\n export S3_SECRET_KEY=\"${secret_access_key}\"\n \ export S3_REGION=\"${region}\"\n export S3_ENDPOINT=\"${endpoint}\"\n export S3_BUCKET=\"${bucket}\"\n export S3_PREFIX=\"${backup_path#/}/$PBM_BACKUP_DIR_NAME\"\n \ \n DP_log \"storage config have been extracted.\"\n}\n\n# config backup agent\ngenerate_endpoints() {\n local fqdns=$1\n local port=$2\n\n if [ -z \"$fqdns\" ]; then\n \ echo \"ERROR: No FQDNs provided for endpoints.\" >&2\n exit 1\n \ fi\n\n IFS=',' read -ra fqdn_array <<< \"$fqdns\"\n local endpoints=()\n\n \ for fqdn in \"${fqdn_array[@]}\"; do\n trimmed_fqdn=$(echo \"$fqdn\" | xargs)\n if [[ -n \"$trimmed_fqdn\" ]]; then\n endpoints+=(\"${trimmed_fqdn}:${port}\")\n \ fi\n done\n\n IFS=','; echo \"${endpoints[*]}\"\n}\n\nfunction export_pbm_env_vars() {\n export PBM_AGENT_MONGODB_USERNAME=\"$MONGODB_USER\"\n \ export PBM_AGENT_MONGODB_PASSWORD=\"$MONGODB_PASSWORD\"\n \n cfg_server_endpoints=\"$(generate_endpoints \"$CFG_SERVER_POD_FQDN_LIST\" \"$CFG_SERVER_INTERNAL_PORT\")\"\n export PBM_MONGODB_URI=\"mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$cfg_server_endpoints/?authSource=admin&replSetName=$CFG_SERVER_REPLICA_SET_NAME\"\n}\n\nfunction export_pbm_env_vars_for_rs() {\n export PBM_AGENT_MONGODB_USERNAME=\"$MONGODB_USER\"\n \ export PBM_AGENT_MONGODB_PASSWORD=\"$MONGODB_PASSWORD\"\n\n mongodb_endpoints=\"$(generate_endpoints \"$MONGODB_POD_FQDN_LIST\" \"$KB_SERVICE_PORT\")\"\n export PBM_MONGODB_URI=\"mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$mongodb_endpoints/?authSource=admin&replSetName=$MONGODB_REPLICA_SET_NAME\"\n}\n\nfunction sync_pbm_storage_config() {\n echo \"INFO: Checking if PBM storage config exists\"\n \ pbm_config_exists=true\n check_config=$(pbm 
config --mongodb-uri \"$PBM_MONGODB_URI\" -o json) || {\n pbm_config_exists=false\n echo \"INFO: PBM storage config does not exist.\"\n }\n if [ \"$pbm_config_exists\" = \"true\" ]; then\n # check_config=$(pbm config --mongodb-uri \"$PBM_MONGODB_URI\" -o json)\n current_endpoint=$(echo \"$check_config\" | jq -r '.storage.s3.endpointUrl')\n current_region=$(echo \"$check_config\" | jq -r '.storage.s3.region')\n current_bucket=$(echo \"$check_config\" | jq -r '.storage.s3.bucket')\n current_prefix=$(echo \"$check_config\" | jq -r '.storage.s3.prefix')\n echo \"INFO: Current PBM storage endpoint: $current_endpoint\"\n echo \"INFO: Current PBM storage region: $current_region\"\n \ echo \"INFO: Current PBM storage bucket: $current_bucket\"\n echo \"INFO: Current PBM storage prefix: $current_prefix\"\n if [ \"$current_prefix\" = \"$S3_PREFIX\" ] && [ \"$current_region\" = \"$S3_REGION\" ] && [ \"$current_bucket\" = \"$S3_BUCKET\" ] && [ \"$current_endpoint\" = \"$S3_ENDPOINT\" ]; then\n echo \"INFO: PBM storage config already exists.\"\n else\n pbm_config_exists=false\n \ fi\n fi\n if [ \"$pbm_config_exists\" = \"false\" ]; then\n cat < /dev/null\nstorage:\n \ type: s3\n s3:\n region: ${S3_REGION}\n bucket: ${S3_BUCKET}\n prefix: ${S3_PREFIX}\n endpointUrl: ${S3_ENDPOINT}\n forcePathStyle: ${S3_FORCE_PATH_STYLE:-false}\n \ credentials:\n access-key-id: ${S3_ACCESS_KEY}\n secret-access-key: ${S3_SECRET_KEY}\nrestore:\n numDownloadWorkers: ${PBM_RESTORE_DOWNLOAD_WORKERS:-4}\nbackup:\n \ timeouts:\n startingStatus: 60\nEOF\n sleep 5\n echo \"INFO: PBM storage configuration completed.\"\n fi\n}\n\nfunction print_pbm_logs_by_event() {\n local pbm_event=$1\n # echo \"INFO: Printing PBM logs by event: $pbm_event\"\n \ # shellcheck disable=SC2328\n local pbm_logs=$(pbm logs -e $pbm_event --tail 200 --mongodb-uri \"$PBM_MONGODB_URI\" > /dev/null)\n local purged_logs=$(echo \"$pbm_logs\" | awk -v start=\"$PBM_LOGS_START_TIME\" '$1 >= start')\n if [ -z \"$purged_logs\" ]; then\n return\n fi\n echo \"$purged_logs\"\n # echo \"INFO: PBM logs by event: $pbm_event printed.\"\n}\n\nfunction print_pbm_tail_logs() {\n echo \"INFO: Printing PBM tail logs\"\n pbm logs --tail 20 --mongodb-uri \"$PBM_MONGODB_URI\"\n}\n\nfunction handle_backup_exit() {\n exit_code=$?\n \ set +e\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n echo \"failed with exit code $exit_code\"\n touch \"${DP_BACKUP_INFO_FILE}.exit\"\n \ exit 1\n fi\n}\n\nfunction handle_restore_exit() {\n exit_code=$?\n set +e\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n echo \"failed with exit code $exit_code\"\n exit 1\n fi\n}\n\nfunction handle_pitr_exit() {\n exit_code=$?\n set +e\n if [[ \"$PBM_DISABLE_PITR_WHEN_EXIT\" == \"true\" ]]; then\n disable_pitr\n fi\n\n if [ $exit_code -ne 0 ]; then\n print_pbm_tail_logs\n\n \ echo \"failed with exit code $exit_code\"\n touch \"${DP_BACKUP_INFO_FILE}.exit\"\n \ exit 1\n fi\n}\n\nfunction wait_for_other_operations() {\n status_result=$(pbm status --mongodb-uri \"$PBM_MONGODB_URI\" -o json) || {\n echo \"INFO: PBM is not configured.\"\n return\n }\n local except_type=$1\n local running_status=$(echo \"$status_result\" | jq -r '.running')\n local retry_count=0\n local max_retries=60\n \ while [ -n \"$running_status\" ] && [ \"$running_status\" != \"{}\" ] && [ $retry_count -lt $max_retries ]; do\n retry_count=$((retry_count+1))\n local running_type=$(echo \"$running_status\" | jq -r '.type')\n if [ -n \"$running_type\" ] && [ \"$running_type\" = \"$except_type\" ]; then\n break\n fi\n echo 
\"INFO: Other operation $running_type is running, waiting... ($retry_count/$max_retries)\"\n \ sleep 5\n running_status=$(pbm status --mongodb-uri \"$PBM_MONGODB_URI\" -o json | jq -r '.running')\n done\n if [ $retry_count -ge $max_retries ]; then\n echo \"ERROR: Other operations are still running after $max_retries retries\"\n exit 1\n fi\n}\n\nfunction export_logs_start_time_env() {\n \ local logs_start_time=$(date +\"%Y-%m-%dT%H:%M:%SZ\")\n export PBM_LOGS_START_TIME=\"${logs_start_time}\"\n}\n\nfunction sync_pbm_config_from_storage() {\n echo \"INFO: Syncing PBM config from storage...\"\n\n \ wait_for_other_operations\n\n pbm config --force-resync --mongodb-uri \"$PBM_MONGODB_URI\"\n \ # print_pbm_logs_by_event \"resync\"\n \n # resync wait flag might don't work\n wait_for_other_operations\n\n echo \"INFO: PBM config synced from storage.\"\n}\n\nfunction wait_for_backup_completion() {\n describe_result=\"\"\n local retry_interval=5\n \ local attempt=1\n local max_retries=12\n set +e\n while true; do\n describe_result=$(pbm describe-backup --mongodb-uri \"$PBM_MONGODB_URI\" \"$backup_name\" -o json 2>&1)\n if [ $? -eq 0 ] && [ -n \"$describe_result\" ]; then\n backup_status=$(echo \"$describe_result\" | jq -r '.status')\n if [ \"$backup_status\" = \"starting\" ] || [ \"$backup_status\" = \"running\" ]; then\n echo \"INFO: Backup status is $backup_status, retrying in ${retry_interval}s...\"\n elif [ \"$backup_status\" = \"\" ]; then\n echo \"INFO: Backup status is $backup_status, retrying in ${retry_interval}s...\"\n attempt=$((attempt+1))\n elif [ \"$backup_status\" = \"done\" ]; then\n echo \"INFO: Backup status is done.\"\n break\n else\n echo \"ERROR: Backup failed with status: $backup_status\"\n exit 1\n fi\n elif echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"INFO: Backup metadata not found, retrying in ${retry_interval}s...\"\n attempt=$((attempt+1))\n else\n \ echo \"ERROR: Unexpected: $describe_result\"\n exit 1\n fi\n sleep $retry_interval\n if [ $attempt -gt $max_retries ]; then\n echo \"ERROR: Failed to get backup status after $max_retries attempts\"\n exit 1\n fi\n \ done\n set -e\n\n backup_status=$(echo \"$describe_result\" | jq -r '.status')\n \ if [ \"$backup_status\" != \"done\" ]; then\n echo \"ERROR: Backup did not complete successfully, final status: $backup_status\"\n exit 1\n fi\n}\n\nfunction create_restore_signal() {\n phase=$1\n kubectl apply -f - <&1)\n kubectl_get_exit_code=$?\n set -e\n # Wait for the restore signal ConfigMap to be created or updated\n if [[ \"$kubectl_get_exit_code\" -ne 0 ]]; then\n if [[ \"$kubectl_get_result\" == *\"not found\"* ]]; then\n create_restore_signal \"start\"\n fi\n \ else\n annotation_value=$(echo \"$kubectl_get_result\" | jq -r '.metadata.labels[\"apps.kubeblocks.io/restore-mongodb-shard\"] // empty')\n \ if [[ \"$annotation_value\" == \"start\" ]]; then\n break\n \ elif [[ \"$annotation_value\" == \"end\" ]]; then\n echo \"INFO: Restore completed, exiting.\"\n exit 0\n else\n \ echo \"INFO: Restore start signal is $annotation_value, updating...\"\n \ create_restore_signal \"start\"\n fi\n fi\n \ sleep 1\n done\n sleep 5\n echo \"INFO: Prepare restore start signal completed.\"\n}\n\nfunction process_restore_end_signal() {\n echo \"INFO: Waiting for prepare restore end signal...\"\n sleep 5\n dp_cm_name=\"$CLUSTER_NAME-restore-signal\"\n \ dp_cm_namespace=\"$CLUSTER_NAMESPACE\"\n while true; do\n set +e\n kubectl_get_result=$(kubectl get configmap $dp_cm_name -n $dp_cm_namespace -o json 2>&1)\n 
kubectl_get_exit_code=$?\n set -e\n # Wait for the restore signal ConfigMap to be created or updated\n if [[ \"$kubectl_get_exit_code\" -ne 0 ]]; then\n if [[ \"$kubectl_get_result\" == *\"not found\"* ]]; then\n create_restore_signal \"end\"\n fi\n else\n \ annotation_value=$(echo \"$kubectl_get_result\" | jq -r '.metadata.labels[\"apps.kubeblocks.io/restore-mongodb-shard\"] // empty')\n if [[ \"$annotation_value\" == \"end\" ]]; then\n break\n \ else\n echo \"INFO: Restore end signal is $annotation_value, updating...\"\n create_restore_signal \"end\"\n fi\n \ fi\n sleep 1\n done\n echo \"INFO: Prepare restore end signal completed.\"\n}\n\nfunction get_describe_backup_info() {\n describe_result=\"\"\n \ local max_retries=60\n local retry_interval=5\n local attempt=1\n set +e\n \ while [ $attempt -le $max_retries ]; do\n describe_result=$(pbm describe-backup --mongodb-uri \"$PBM_MONGODB_URI\" \"$backup_name\" -o json 2>&1)\n if [ $? -eq 0 ] && [ -n \"$describe_result\" ]; then\n break\n elif echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"INFO: Attempt $attempt: backup $backup_name not found, retrying in ${retry_interval}s...\"\n \ if [ $((attempt % 30)) -eq 29 ]; then\n echo \"INFO: Sync PBM config from storage again.\"\n sync_pbm_config_from_storage\n \ fi\n sleep $retry_interval\n ((attempt++))\n continue\n \ else\n echo \"ERROR: Failed to get backup metadata: $describe_result\"\n \ exit 1\n fi\n done\n set -e\n\n if [ -z \"$describe_result\" ] || echo \"$describe_result\" | grep -q \"not found\"; then\n echo \"ERROR: Failed to get backup metadata after $max_retries attempts\"\n exit 1\n \ fi\n}\n\nfunction wait_for_restoring() {\n local cnf_file=\"${MOUNT_DIR}/tmp/pbm_restore.cnf\"\n \ cat < ${MOUNT_DIR}/tmp/pbm_restore.cnf\nstorage:\n type: s3\n s3:\n \ region: ${S3_REGION}\n bucket: ${S3_BUCKET}\n prefix: ${S3_PREFIX}\n \ endpointUrl: ${S3_ENDPOINT}\n forcePathStyle: ${S3_FORCE_PATH_STYLE:-false}\n \ credentials:\n access-key-id: ${S3_ACCESS_KEY}\n secret-access-key: ${S3_SECRET_KEY}\nEOF\n local attempt=0\n local max_retries=12\n local try_interval=5\n \ while true; do\n restore_status=$(pbm describe-restore \"$restore_name\" -c $cnf_file -o json | jq -r '.status') \n echo \"INFO: Restore $restore_name status: $restore_status, retrying in ${try_interval}s...\"\n if [ \"$restore_status\" = \"done\" ]; then\n rm $cnf_file\n break\n elif [ \"$restore_status\" = \"starting\" ] || [ \"$restore_status\" = \"running\" ]; then\n sleep $try_interval\n elif [ \"$restore_status\" = \"\" ]; then\n sleep $try_interval\n \ attempt=$((attempt+1))\n if [ $attempt -gt $max_retries ]; then\n \ echo \"ERROR: Restore $restore_name status is still empty after $max_retries retries\"\n rm $cnf_file\n exit 1\n fi\n else\n rm $cnf_file\n exit 1\n fi\n done\n}\n#!/bin/bash\nset -e\nset -o pipefail\nexport PATH=\"$PATH:$DP_DATASAFED_BIN_PATH:$MOUNT_DIR/tmp/bin\"\nexport DATASAFED_BACKEND_BASE_PATH=\"$DP_BACKUP_BASE_PATH\"\n\nexport_pbm_env_vars_for_rs\n\nset_backup_config_env\n\nexport_logs_start_time_env\n\ntrap handle_restore_exit EXIT\n\nwait_for_other_operations\n\nsync_pbm_storage_config\n\nsync_pbm_config_from_storage\n\nextras=$(cat /dp_downward/status_extras)\nbackup_name=$(echo \"$extras\" | jq -r '.[0].backup_name')\nbackup_type=$(echo \"$extras\" | jq -r '.[0].backup_type')\n\nif [ -z \"$backup_type\" ] || [ -z \"$backup_name\" ]; then\n echo \"ERROR: Backup type or backup name is empty, skip restore.\"\n exit 1\nfi\n\nget_describe_backup_info\n\nrs_name=$(echo \"$describe_result\" 
| jq -r '.replsets[0].name')\nmappings=\"$MONGODB_REPLICA_SET_NAME=$rs_name\"\necho \"INFO: Replica set mappings: $mappings\"\n\nprocess_restore_start_signal\n\nwait_for_other_operations\n\nrestore_name=$(pbm restore $backup_name --mongodb-uri \"$PBM_MONGODB_URI\" --replset-remapping \"$mappings\" -o json | jq -r '.name')\n\nwait_for_restoring\n\nprocess_restore_end_signal\n" env: - name: DP_BACKUP_NAME value: backup-ns-gtubu-mongodb-trwkwn-20260212162606 - name: DP_TARGET_RELATIVE_PATH - name: DP_BACKUP_ROOT_PATH value: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb - name: DP_BACKUP_BASE_PATH value: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 - name: DP_BACKUP_STOP_TIME value: "2026-02-12T08:26:18Z" - name: DATA_DIR value: /data/mongodb/db - name: MOUNT_DIR value: /data/mongodb - name: PBM_BACKUP_DIR_NAME value: pbm-backups - name: PBM_BACKUP_TYPE value: physical - name: PBM_COMPRESSION value: s2 - name: PBM_RESTORE_DOWNLOAD_WORKERS value: "4" - name: PBM_IMAGE_TAG value: 2.12.0 - name: PSM_IMAGE_TAG value: 8.0.17 - name: MONGODB_USER valueFrom: secretKeyRef: key: username name: mongodb-trwkwn-backup-mongodb-account-root - name: MONGODB_PASSWORD valueFrom: secretKeyRef: key: password name: mongodb-trwkwn-backup-mongodb-account-root - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: PATH value: /tools/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - name: KB_SERVICE_CHARACTER_TYPE value: mongodb - name: SERVICE_PORT value: $(KB_SERVICE_PORT) - name: MONGODB_ROOT_USER value: $(MONGODB_USER) - name: MONGODB_ROOT_PASSWORD value: $(MONGODB_PASSWORD) - name: MONGODB_GRANT_ANYACTION_PRIVILEGE value: "true" - name: PBM_AGENT_MONGODB_USERNAME value: $(MONGODB_USER) - name: PBM_AGENT_MONGODB_PASSWORD value: $(MONGODB_PASSWORD) - name: PBM_MONGODB_REPLICA_SET value: $(KB_CLUSTER_COMP_NAME) - name: PBM_AGENT_SIDECAR value: "true" - name: PBM_AGENT_SIDECAR_SLEEP value: "5" - name: PBM_MONGODB_URI value: mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@localhost:$(KB_SERVICE_PORT)/?authSource=admin - name: KB_POD_FQDN value: $(POD_NAME).$(CLUSTER_COMPONENT_NAME)-headless.$(CLUSTER_NAMESPACE).svc - name: DP_DB_USER valueFrom: secretKeyRef: key: username name: mongodb-trwkwn-backup-mongodb-account-root - name: DP_DB_PASSWORD valueFrom: secretKeyRef: key: password name: mongodb-trwkwn-backup-mongodb-account-root - name: DP_DB_PORT value: "27017" - name: DP_DB_HOST value: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed envFrom: - configMapRef: name: mongodb-trwkwn-backup-mongodb-env optional: false image: docker.io/apecloud/percona-backup-mongodb:2.12.0 imagePullPolicy: IfNotPresent name: restore resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data/mongodb name: data - mountPath: /dp_downward/ name: downward-volume - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: 
kube-api-access-bh5h7 readOnly: true - args: - |2 set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" command: - sh - -c env: - name: DP_DATASAFED_BIN_PATH value: /bin/datasafed image: docker.io/apecloud/kubeblocks-tools:1.0.2 imagePullPolicy: IfNotPresent name: restore-manager resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-bh5h7 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: kbcli-test-registry-key initContainers: - command: - /bin/sh - -c - /scripts/install-datasafed.sh /bin/datasafed image: docker.io/apecloud/datasafed:0.2.3 imagePullPolicy: IfNotPresent name: dp-copy-datasafed resources: limits: cpu: "0" memory: "0" requests: cpu: "0" memory: "0" securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-bh5h7 readOnly: true nodeName: aks-cicdamdpool-14916756-vmss000002 nodeSelector: kubernetes.io/hostname: aks-cicdamdpool-14916756-vmss000002 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: runAsUser: 0 serviceAccount: kubeblocks-dataprotection-worker serviceAccountName: kubeblocks-dataprotection-worker terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: data persistentVolumeClaim: claimName: data-mongodb-trwkwn-backup-mongodb-0 - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/backup-extras'] path: status_extras name: downward-volume - downwardAPI: defaultMode: 420 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] path: stop_restore_manager name: downward-volume-sidecard - name: dp-datasafed-config secret: defaultMode: 420 secretName: tool-config-backuprepo-kbcli-test-4fs2t9 - emptyDir: {} name: dp-datasafed-bin - name: kube-api-access-bh5h7 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: - lastProbeTime: null lastTransitionTime: "2026-02-12T08:34:32Z" status: "False" type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: "2026-02-12T08:29:18Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 
"2026-02-12T08:34:28Z" reason: PodFailed status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2026-02-12T08:34:28Z" reason: PodFailed status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2026-02-12T08:29:15Z" status: "True" type: PodScheduled containerStatuses: - containerID: containerd://11d39685b9b7f9cdd04e1f84ed2dab5ca8e2c4420216b56d49b6e3f072d6713e image: docker.io/apecloud/percona-backup-mongodb:2.12.0 imageID: docker.io/apecloud/percona-backup-mongodb@sha256:16a65d6189650fa7c2bb8de02064fa94fed63c38665f67dff7d7355b66bd144d lastState: {} name: restore ready: false restartCount: 0 started: false state: terminated: containerID: containerd://11d39685b9b7f9cdd04e1f84ed2dab5ca8e2c4420216b56d49b6e3f072d6713e exitCode: 1 finishedAt: "2026-02-12T08:34:27Z" reason: Error startedAt: "2026-02-12T08:29:18Z" volumeMounts: - mountPath: /data/mongodb name: data - mountPath: /dp_downward/ name: downward-volume - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-bh5h7 readOnly: true recursiveReadOnly: Disabled - containerID: containerd://07fc6a445c980e11402cb53e1b324fdad0b980c5580e9789b87c45353bbd93a8 image: docker.io/apecloud/kubeblocks-tools:1.0.2 imageID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea lastState: {} name: restore-manager ready: false restartCount: 0 started: false state: terminated: containerID: containerd://07fc6a445c980e11402cb53e1b324fdad0b980c5580e9789b87c45353bbd93a8 exitCode: 0 finishedAt: "2026-02-12T08:34:29Z" reason: Completed startedAt: "2026-02-12T08:29:18Z" volumeMounts: - mountPath: /dp_downward name: downward-volume-sidecard - mountPath: /etc/datasafed name: dp-datasafed-config readOnly: true recursiveReadOnly: Disabled - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-bh5h7 readOnly: true recursiveReadOnly: Disabled hostIP: 10.224.0.8 hostIPs: - ip: 10.224.0.8 initContainerStatuses: - containerID: containerd://d24964e3510167658b9e4fe180b19b148fbe210330619ee077ddb94ec4bce1ec image: docker.io/apecloud/datasafed:0.2.3 imageID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f lastState: {} name: dp-copy-datasafed ready: true restartCount: 0 started: false state: terminated: containerID: containerd://d24964e3510167658b9e4fe180b19b148fbe210330619ee077ddb94ec4bce1ec exitCode: 0 finishedAt: "2026-02-12T08:29:16Z" reason: Completed startedAt: "2026-02-12T08:29:16Z" volumeMounts: - mountPath: /bin/datasafed name: dp-datasafed-bin - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-bh5h7 readOnly: true recursiveReadOnly: Disabled phase: Failed podIP: 10.244.6.131 podIPs: - ip: 10.244.6.131 qosClass: BestEffort startTime: "2026-02-12T08:29:15Z" ------------------------------------------------------------------------------------------------------------------ --------------------------------------describe pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc--------------------------------------  `kubectl describe pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 --namespace ns-gtubu `(B  Name: 
restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 Namespace: ns-gtubu Priority: 0 Service Account: kubeblocks-dataprotection-worker Node: aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Start Time: Thu, 12 Feb 2026 16:34:39 +0800 Labels: app.kubernetes.io/managed-by=kubeblocks-dataprotection batch.kubernetes.io/controller-uid=7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 batch.kubernetes.io/job-name=restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 controller-uid=7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 dataprotection.kubeblocks.io/restore=mongodb-trwkwn-backup-mongodb-9da370b0-postready job-name=restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 Annotations: dataprotection.kubeblocks.io/backup-extras: [{"backup_name":"2026-02-12T08:26:14Z","backup_type":"physical","last_write_time":"2026-02-12T08:26:16Z"}] dataprotection.kubeblocks.io/stop-restore-manager: true Status: Failed IP: 10.244.6.4 IPs: IP: 10.244.6.4 Controlled By: Job/restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 Init Containers: dp-copy-datasafed: Container ID: containerd://c81b1814709c95a2e04ec8cb284cc62c8dc6e392545e8bdfdb2d33a89fea37e0 Image: docker.io/apecloud/datasafed:0.2.3 Image ID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f Port: Host Port: Command: /bin/sh -c /scripts/install-datasafed.sh /bin/datasafed State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 16:34:39 +0800 Finished: Thu, 12 Feb 2026 16:34:39 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: Mounts: /bin/datasafed from dp-datasafed-bin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pxpz (ro) Containers: restore: Container ID: containerd://3611fad73a5e87bd245d835c37578806188a21f9e700aa7f560f3b745b35ae86 Image: docker.io/apecloud/percona-backup-mongodb:2.12.0 Image ID: docker.io/apecloud/percona-backup-mongodb@sha256:16a65d6189650fa7c2bb8de02064fa94fed63c38665f67dff7d7355b66bd144d Port: Host Port: Command: bash -c #!/bin/bash # shellcheck disable=SC2086 function handle_exit() { exit_code=$? if [ $exit_code -ne 0 ]; then echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } # log info file function DP_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} INFO: $msg" } # log error info function DP_error_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} ERROR: $msg" } function buildJsonString() { local jsonString=${1} local key=${2} local value=${3} if [ ! -z "$jsonString" ];then jsonString="${jsonString}," fi echo "${jsonString}\"${key}\":\"${value}\"" } # Save backup status info file for syncing progress. # timeFormat: %Y-%m-%dT%H:%M:%SZ function DP_save_backup_status_info() { export PATH="$PATH:$DP_DATASAFED_BIN_PATH" export DATASAFED_BACKEND_BASE_PATH="$DP_BACKUP_BASE_PATH" local totalSize=$1 local startTime=$2 local stopTime=$3 local timeZone=$4 local extras=$5 local timeZoneStr="" if [ ! 
-z ${timeZone} ]; then timeZoneStr=",\"timeZone\":\"${timeZone}\"" fi if [ -z "${stopTime}" ];then echo "{\"totalSize\":\"${totalSize}\"}" > ${DP_BACKUP_INFO_FILE} elif [ -z "${startTime}" ];then echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"end\":\"${stopTime}\"${timeZoneStr}}}" > ${DP_BACKUP_INFO_FILE} else echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"start\":\"${startTime}\",\"end\":\"${stopTime}\"${timeZoneStr}}}" > ${DP_BACKUP_INFO_FILE} fi } function getToolConfigValue() { local var=$1 cat "$toolConfig" | grep "$var" | awk '{print $NF}' } function set_backup_config_env() { toolConfig=/etc/datasafed/datasafed.conf if [ ! -f ${toolConfig} ]; then DP_error_log "Config file not found: ${toolConfig}" exit 1 fi local provider="" local access_key_id="" local secret_access_key="" local region="" local endpoint="" local bucket="" IFS=$'\n' for line in $(cat ${toolConfig}); do line=$(eval echo $line) if [[ $line == "access_key_id"* ]]; then access_key_id=$(getToolConfigValue "$line") elif [[ $line == "secret_access_key"* ]]; then secret_access_key=$(getToolConfigValue "$line") elif [[ $line == "region"* ]]; then region=$(getToolConfigValue "$line") elif [[ $line == "endpoint"* ]]; then endpoint=$(getToolConfigValue "$line") elif [[ $line == "root"* ]]; then bucket=$(getToolConfigValue "$line") elif [[ $line == "provider"* ]]; then provider=$(getToolConfigValue "$line") fi done if [[ ! $endpoint =~ ^https?:// ]]; then endpoint="https://${endpoint}" fi if [[ "$provider" == "Alibaba" ]]; then regex='https?:\/\/oss-(.*?)\.aliyuncs\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "TencentCOS" ]]; then regex='https?:\/\/cos\.(.*?)\.myqcloud\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "Minio" ]]; then export S3_FORCE_PATH_STYLE="true" else echo "Unsupported provider: $provider" fi backup_path=$(dirname "$DP_BACKUP_BASE_PATH") export S3_ACCESS_KEY="${access_key_id}" export S3_SECRET_KEY="${secret_access_key}" export S3_REGION="${region}" export S3_ENDPOINT="${endpoint}" export S3_BUCKET="${bucket}" export S3_PREFIX="${backup_path#/}/$PBM_BACKUP_DIR_NAME" DP_log "storage config have been extracted." } # config backup agent generate_endpoints() { local fqdns=$1 local port=$2 if [ -z "$fqdns" ]; then echo "ERROR: No FQDNs provided for endpoints." 
>&2 exit 1 fi IFS=',' read -ra fqdn_array <<< "$fqdns" local endpoints=() for fqdn in "${fqdn_array[@]}"; do trimmed_fqdn=$(echo "$fqdn" | xargs) if [[ -n "$trimmed_fqdn" ]]; then endpoints+=("${trimmed_fqdn}:${port}") fi done IFS=','; echo "${endpoints[*]}" } function export_pbm_env_vars() { export PBM_AGENT_MONGODB_USERNAME="$MONGODB_USER" export PBM_AGENT_MONGODB_PASSWORD="$MONGODB_PASSWORD" cfg_server_endpoints="$(generate_endpoints "$CFG_SERVER_POD_FQDN_LIST" "$CFG_SERVER_INTERNAL_PORT")" export PBM_MONGODB_URI="mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$cfg_server_endpoints/?authSource=admin&replSetName=$CFG_SERVER_REPLICA_SET_NAME" } function export_pbm_env_vars_for_rs() { export PBM_AGENT_MONGODB_USERNAME="$MONGODB_USER" export PBM_AGENT_MONGODB_PASSWORD="$MONGODB_PASSWORD" mongodb_endpoints="$(generate_endpoints "$MONGODB_POD_FQDN_LIST" "$KB_SERVICE_PORT")" export PBM_MONGODB_URI="mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$mongodb_endpoints/?authSource=admin&replSetName=$MONGODB_REPLICA_SET_NAME" } function sync_pbm_storage_config() { echo "INFO: Checking if PBM storage config exists" pbm_config_exists=true check_config=$(pbm config --mongodb-uri "$PBM_MONGODB_URI" -o json) || { pbm_config_exists=false echo "INFO: PBM storage config does not exist." } if [ "$pbm_config_exists" = "true" ]; then # check_config=$(pbm config --mongodb-uri "$PBM_MONGODB_URI" -o json) current_endpoint=$(echo "$check_config" | jq -r '.storage.s3.endpointUrl') current_region=$(echo "$check_config" | jq -r '.storage.s3.region') current_bucket=$(echo "$check_config" | jq -r '.storage.s3.bucket') current_prefix=$(echo "$check_config" | jq -r '.storage.s3.prefix') echo "INFO: Current PBM storage endpoint: $current_endpoint" echo "INFO: Current PBM storage region: $current_region" echo "INFO: Current PBM storage bucket: $current_bucket" echo "INFO: Current PBM storage prefix: $current_prefix" if [ "$current_prefix" = "$S3_PREFIX" ] && [ "$current_region" = "$S3_REGION" ] && [ "$current_bucket" = "$S3_BUCKET" ] && [ "$current_endpoint" = "$S3_ENDPOINT" ]; then echo "INFO: PBM storage config already exists." else pbm_config_exists=false fi fi if [ "$pbm_config_exists" = "false" ]; then cat < /dev/null storage: type: s3 s3: region: ${S3_REGION} bucket: ${S3_BUCKET} prefix: ${S3_PREFIX} endpointUrl: ${S3_ENDPOINT} forcePathStyle: ${S3_FORCE_PATH_STYLE:-false} credentials: access-key-id: ${S3_ACCESS_KEY} secret-access-key: ${S3_SECRET_KEY} restore: numDownloadWorkers: ${PBM_RESTORE_DOWNLOAD_WORKERS:-4} backup: timeouts: startingStatus: 60 EOF sleep 5 echo "INFO: PBM storage configuration completed." fi } function print_pbm_logs_by_event() { local pbm_event=$1 # echo "INFO: Printing PBM logs by event: $pbm_event" # shellcheck disable=SC2328 local pbm_logs=$(pbm logs -e $pbm_event --tail 200 --mongodb-uri "$PBM_MONGODB_URI" > /dev/null) local purged_logs=$(echo "$pbm_logs" | awk -v start="$PBM_LOGS_START_TIME" '$1 >= start') if [ -z "$purged_logs" ]; then return fi echo "$purged_logs" # echo "INFO: PBM logs by event: $pbm_event printed." } function print_pbm_tail_logs() { echo "INFO: Printing PBM tail logs" pbm logs --tail 20 --mongodb-uri "$PBM_MONGODB_URI" } function handle_backup_exit() { exit_code=$? set +e if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function handle_restore_exit() { exit_code=$? 
set +e if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" exit 1 fi } function handle_pitr_exit() { exit_code=$? set +e if [[ "$PBM_DISABLE_PITR_WHEN_EXIT" == "true" ]]; then disable_pitr fi if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function wait_for_other_operations() { status_result=$(pbm status --mongodb-uri "$PBM_MONGODB_URI" -o json) || { echo "INFO: PBM is not configured." return } local except_type=$1 local running_status=$(echo "$status_result" | jq -r '.running') local retry_count=0 local max_retries=60 while [ -n "$running_status" ] && [ "$running_status" != "{}" ] && [ $retry_count -lt $max_retries ]; do retry_count=$((retry_count+1)) local running_type=$(echo "$running_status" | jq -r '.type') if [ -n "$running_type" ] && [ "$running_type" = "$except_type" ]; then break fi echo "INFO: Other operation $running_type is running, waiting... ($retry_count/$max_retries)" sleep 5 running_status=$(pbm status --mongodb-uri "$PBM_MONGODB_URI" -o json | jq -r '.running') done if [ $retry_count -ge $max_retries ]; then echo "ERROR: Other operations are still running after $max_retries retries" exit 1 fi } function export_logs_start_time_env() { local logs_start_time=$(date +"%Y-%m-%dT%H:%M:%SZ") export PBM_LOGS_START_TIME="${logs_start_time}" } function sync_pbm_config_from_storage() { echo "INFO: Syncing PBM config from storage..." wait_for_other_operations pbm config --force-resync --mongodb-uri "$PBM_MONGODB_URI" # print_pbm_logs_by_event "resync" # resync wait flag might don't work wait_for_other_operations echo "INFO: PBM config synced from storage." } function wait_for_backup_completion() { describe_result="" local retry_interval=5 local attempt=1 local max_retries=12 set +e while true; do describe_result=$(pbm describe-backup --mongodb-uri "$PBM_MONGODB_URI" "$backup_name" -o json 2>&1) if [ $? -eq 0 ] && [ -n "$describe_result" ]; then backup_status=$(echo "$describe_result" | jq -r '.status') if [ "$backup_status" = "starting" ] || [ "$backup_status" = "running" ]; then echo "INFO: Backup status is $backup_status, retrying in ${retry_interval}s..." elif [ "$backup_status" = "" ]; then echo "INFO: Backup status is $backup_status, retrying in ${retry_interval}s..." attempt=$((attempt+1)) elif [ "$backup_status" = "done" ]; then echo "INFO: Backup status is done." break else echo "ERROR: Backup failed with status: $backup_status" exit 1 fi elif echo "$describe_result" | grep -q "not found"; then echo "INFO: Backup metadata not found, retrying in ${retry_interval}s..." attempt=$((attempt+1)) else echo "ERROR: Unexpected: $describe_result" exit 1 fi sleep $retry_interval if [ $attempt -gt $max_retries ]; then echo "ERROR: Failed to get backup status after $max_retries attempts" exit 1 fi done set -e backup_status=$(echo "$describe_result" | jq -r '.status') if [ "$backup_status" != "done" ]; then echo "ERROR: Backup did not complete successfully, final status: $backup_status" exit 1 fi } function create_restore_signal() { phase=$1 kubectl apply -f - <&1) kubectl_get_exit_code=$? 
set -e # Wait for the restore signal ConfigMap to be created or updated if [[ "$kubectl_get_exit_code" -ne 0 ]]; then if [[ "$kubectl_get_result" == *"not found"* ]]; then create_restore_signal "start" fi else annotation_value=$(echo "$kubectl_get_result" | jq -r '.metadata.labels["apps.kubeblocks.io/restore-mongodb-shard"] // empty') if [[ "$annotation_value" == "start" ]]; then break elif [[ "$annotation_value" == "end" ]]; then echo "INFO: Restore completed, exiting." exit 0 else echo "INFO: Restore start signal is $annotation_value, updating..." create_restore_signal "start" fi fi sleep 1 done sleep 5 echo "INFO: Prepare restore start signal completed." } function process_restore_end_signal() { echo "INFO: Waiting for prepare restore end signal..." sleep 5 dp_cm_name="$CLUSTER_NAME-restore-signal" dp_cm_namespace="$CLUSTER_NAMESPACE" while true; do set +e kubectl_get_result=$(kubectl get configmap $dp_cm_name -n $dp_cm_namespace -o json 2>&1) kubectl_get_exit_code=$? set -e # Wait for the restore signal ConfigMap to be created or updated if [[ "$kubectl_get_exit_code" -ne 0 ]]; then if [[ "$kubectl_get_result" == *"not found"* ]]; then create_restore_signal "end" fi else annotation_value=$(echo "$kubectl_get_result" | jq -r '.metadata.labels["apps.kubeblocks.io/restore-mongodb-shard"] // empty') if [[ "$annotation_value" == "end" ]]; then break else echo "INFO: Restore end signal is $annotation_value, updating..." create_restore_signal "end" fi fi sleep 1 done echo "INFO: Prepare restore end signal completed." } function get_describe_backup_info() { describe_result="" local max_retries=60 local retry_interval=5 local attempt=1 set +e while [ $attempt -le $max_retries ]; do describe_result=$(pbm describe-backup --mongodb-uri "$PBM_MONGODB_URI" "$backup_name" -o json 2>&1) if [ $? -eq 0 ] && [ -n "$describe_result" ]; then break elif echo "$describe_result" | grep -q "not found"; then echo "INFO: Attempt $attempt: backup $backup_name not found, retrying in ${retry_interval}s..." if [ $((attempt % 30)) -eq 29 ]; then echo "INFO: Sync PBM config from storage again." sync_pbm_config_from_storage fi sleep $retry_interval ((attempt++)) continue else echo "ERROR: Failed to get backup metadata: $describe_result" exit 1 fi done set -e if [ -z "$describe_result" ] || echo "$describe_result" | grep -q "not found"; then echo "ERROR: Failed to get backup metadata after $max_retries attempts" exit 1 fi } function wait_for_restoring() { local cnf_file="${MOUNT_DIR}/tmp/pbm_restore.cnf" cat < ${MOUNT_DIR}/tmp/pbm_restore.cnf storage: type: s3 s3: region: ${S3_REGION} bucket: ${S3_BUCKET} prefix: ${S3_PREFIX} endpointUrl: ${S3_ENDPOINT} forcePathStyle: ${S3_FORCE_PATH_STYLE:-false} credentials: access-key-id: ${S3_ACCESS_KEY} secret-access-key: ${S3_SECRET_KEY} EOF local attempt=0 local max_retries=12 local try_interval=5 while true; do restore_status=$(pbm describe-restore "$restore_name" -c $cnf_file -o json | jq -r '.status') echo "INFO: Restore $restore_name status: $restore_status, retrying in ${try_interval}s..." 
if [ "$restore_status" = "done" ]; then rm $cnf_file break elif [ "$restore_status" = "starting" ] || [ "$restore_status" = "running" ]; then sleep $try_interval elif [ "$restore_status" = "" ]; then sleep $try_interval attempt=$((attempt+1)) if [ $attempt -gt $max_retries ]; then echo "ERROR: Restore $restore_name status is still empty after $max_retries retries" rm $cnf_file exit 1 fi else rm $cnf_file exit 1 fi done } #!/bin/bash set -e set -o pipefail export PATH="$PATH:$DP_DATASAFED_BIN_PATH:$MOUNT_DIR/tmp/bin" export DATASAFED_BACKEND_BASE_PATH="$DP_BACKUP_BASE_PATH" export_pbm_env_vars_for_rs set_backup_config_env export_logs_start_time_env trap handle_restore_exit EXIT wait_for_other_operations sync_pbm_storage_config sync_pbm_config_from_storage extras=$(cat /dp_downward/status_extras) backup_name=$(echo "$extras" | jq -r '.[0].backup_name') backup_type=$(echo "$extras" | jq -r '.[0].backup_type') if [ -z "$backup_type" ] || [ -z "$backup_name" ]; then echo "ERROR: Backup type or backup name is empty, skip restore." exit 1 fi get_describe_backup_info rs_name=$(echo "$describe_result" | jq -r '.replsets[0].name') mappings="$MONGODB_REPLICA_SET_NAME=$rs_name" echo "INFO: Replica set mappings: $mappings" process_restore_start_signal wait_for_other_operations restore_name=$(pbm restore $backup_name --mongodb-uri "$PBM_MONGODB_URI" --replset-remapping "$mappings" -o json | jq -r '.name') wait_for_restoring process_restore_end_signal State: Terminated Reason: Error Exit Code: 1 Started: Thu, 12 Feb 2026 16:34:41 +0800 Finished: Thu, 12 Feb 2026 16:36:41 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: mongodb-trwkwn-backup-mongodb-env ConfigMap Optional: false Environment: DP_BACKUP_NAME: backup-ns-gtubu-mongodb-trwkwn-20260212162606 DP_TARGET_RELATIVE_PATH: DP_BACKUP_ROOT_PATH: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb DP_BACKUP_BASE_PATH: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 DP_BACKUP_STOP_TIME: 2026-02-12T08:26:18Z DATA_DIR: /data/mongodb/db MOUNT_DIR: /data/mongodb PBM_BACKUP_DIR_NAME: pbm-backups PBM_BACKUP_TYPE: physical PBM_COMPRESSION: s2 PBM_RESTORE_DOWNLOAD_WORKERS: 4 PBM_IMAGE_TAG: 2.12.0 PSM_IMAGE_TAG: 8.0.17 MONGODB_USER: Optional: false MONGODB_PASSWORD: Optional: false POD_NAME: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 (v1:metadata.name) POD_NAMESPACE: ns-gtubu (v1:metadata.namespace) POD_UID: (v1:metadata.uid) POD_IP: (v1:status.podIP) PATH: /tools/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KB_SERVICE_CHARACTER_TYPE: mongodb SERVICE_PORT: $(KB_SERVICE_PORT) MONGODB_ROOT_USER: $(MONGODB_USER) MONGODB_ROOT_PASSWORD: $(MONGODB_PASSWORD) MONGODB_GRANT_ANYACTION_PRIVILEGE: true PBM_AGENT_MONGODB_USERNAME: $(MONGODB_USER) PBM_AGENT_MONGODB_PASSWORD: $(MONGODB_PASSWORD) PBM_MONGODB_REPLICA_SET: $(KB_CLUSTER_COMP_NAME) PBM_AGENT_SIDECAR: true PBM_AGENT_SIDECAR_SLEEP: 5 PBM_MONGODB_URI: mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@localhost:$(KB_SERVICE_PORT)/?authSource=admin KB_POD_FQDN: $(POD_NAME).$(CLUSTER_COMPONENT_NAME)-headless.$(CLUSTER_NAMESPACE).svc DP_DB_USER: Optional: false DP_DB_PASSWORD: Optional: false DP_DB_PORT: 27017 DP_DB_HOST: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /data/mongodb from data (rw) 
/dp_downward/ from downward-volume (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pxpz (ro) restore-manager: Container ID: containerd://fd2710fec2dfa00a17776ff042741fdf67567c7bf6c77637a423cbdf6a849d69 Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: sh -c Args: set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 16:34:41 +0800 Finished: Thu, 12 Feb 2026 16:36:43 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /dp_downward from downward-volume-sidecard (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9pxpz (ro) Conditions: Type Status PodReadyToStartContainers False Initialized True Ready False ContainersReady False PodScheduled True Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-mongodb-trwkwn-backup-mongodb-0 ReadOnly: false downward-volume: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/backup-extras'] -> status_extras downward-volume-sidecard: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] -> stop_restore_manager dp-datasafed-config: Type: Secret (a volume populated by a Secret) SecretName: tool-config-backuprepo-kbcli-test-4fs2t9 Optional: false dp-datasafed-bin: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: kube-api-access-9pxpz: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=aks-cicdamdpool-14916756-vmss000002 Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m13s default-scheduler Successfully assigned ns-gtubu/restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 to aks-cicdamdpool-14916756-vmss000002 Normal Pulled 5m13s kubelet Container image "docker.io/apecloud/datasafed:0.2.3" already present on machine Normal Created 5m13s kubelet Created container: dp-copy-datasafed Normal Started 5m13s kubelet Started container dp-copy-datasafed Normal Pulled 5m12s kubelet Container image "docker.io/apecloud/percona-backup-mongodb:2.12.0" already present on machine Normal Created 5m12s kubelet Created container: restore Normal Started 5m11s kubelet Started container restore Normal Pulled 5m11s kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on 
machine Normal Created 5m11s kubelet Created container: restore-manager Normal Started 5m11s kubelet Started container restore-manager Warning FailedToRetrieveImagePullSecret 3m10s (x7 over 5m13s) kubelet Unable to retrieve some image pull secrets (kbcli-test-registry-key); attempting to pull the image may not succeed. ------------------------------------------------------------------------------------------------------------------  `kubectl describe pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc --namespace ns-gtubu `(B  Name: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc Namespace: ns-gtubu Priority: 0 Service Account: kubeblocks-dataprotection-worker Node: aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Start Time: Thu, 12 Feb 2026 16:29:15 +0800 Labels: app.kubernetes.io/managed-by=kubeblocks-dataprotection batch.kubernetes.io/controller-uid=7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 batch.kubernetes.io/job-name=restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 controller-uid=7bb1c68c-ac3e-4651-bd1b-7b10277cedf2 dataprotection.kubeblocks.io/restore=mongodb-trwkwn-backup-mongodb-9da370b0-postready job-name=restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 Annotations: dataprotection.kubeblocks.io/backup-extras: [{"backup_name":"2026-02-12T08:26:14Z","backup_type":"physical","last_write_time":"2026-02-12T08:26:16Z"}] dataprotection.kubeblocks.io/stop-restore-manager: true Status: Failed IP: 10.244.6.131 IPs: IP: 10.244.6.131 Controlled By: Job/restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-6-0-0 Init Containers: dp-copy-datasafed: Container ID: containerd://d24964e3510167658b9e4fe180b19b148fbe210330619ee077ddb94ec4bce1ec Image: docker.io/apecloud/datasafed:0.2.3 Image ID: docker.io/apecloud/datasafed@sha256:7775e8184fbc833ee089b33427c4981bd7cd7d98cce5aeff1a9856b5de966b0f Port: Host Port: Command: /bin/sh -c /scripts/install-datasafed.sh /bin/datasafed State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 16:29:16 +0800 Finished: Thu, 12 Feb 2026 16:29:16 +0800 Ready: True Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: Mounts: /bin/datasafed from dp-datasafed-bin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh5h7 (ro) Containers: restore: Container ID: containerd://11d39685b9b7f9cdd04e1f84ed2dab5ca8e2c4420216b56d49b6e3f072d6713e Image: docker.io/apecloud/percona-backup-mongodb:2.12.0 Image ID: docker.io/apecloud/percona-backup-mongodb@sha256:16a65d6189650fa7c2bb8de02064fa94fed63c38665f67dff7d7355b66bd144d Port: Host Port: Command: bash -c #!/bin/bash # shellcheck disable=SC2086 function handle_exit() { exit_code=$? if [ $exit_code -ne 0 ]; then echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } # log info file function DP_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} INFO: $msg" } # log error info function DP_error_log() { msg=$1 local curr_date=$(date -u '+%Y-%m-%d %H:%M:%S') echo "${curr_date} ERROR: $msg" } function buildJsonString() { local jsonString=${1} local key=${2} local value=${3} if [ ! -z "$jsonString" ];then jsonString="${jsonString}," fi echo "${jsonString}\"${key}\":\"${value}\"" } # Save backup status info file for syncing progress. 
# timeFormat: %Y-%m-%dT%H:%M:%SZ function DP_save_backup_status_info() { export PATH="$PATH:$DP_DATASAFED_BIN_PATH" export DATASAFED_BACKEND_BASE_PATH="$DP_BACKUP_BASE_PATH" local totalSize=$1 local startTime=$2 local stopTime=$3 local timeZone=$4 local extras=$5 local timeZoneStr="" if [ ! -z ${timeZone} ]; then timeZoneStr=",\"timeZone\":\"${timeZone}\"" fi if [ -z "${stopTime}" ];then echo "{\"totalSize\":\"${totalSize}\"}" > ${DP_BACKUP_INFO_FILE} elif [ -z "${startTime}" ];then echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"end\":\"${stopTime}\"${timeZoneStr}}}" > ${DP_BACKUP_INFO_FILE} else echo "{\"totalSize\":\"${totalSize}\",\"extras\":[${extras}],\"timeRange\":{\"start\":\"${startTime}\",\"end\":\"${stopTime}\"${timeZoneStr}}}" > ${DP_BACKUP_INFO_FILE} fi } function getToolConfigValue() { local var=$1 cat "$toolConfig" | grep "$var" | awk '{print $NF}' } function set_backup_config_env() { toolConfig=/etc/datasafed/datasafed.conf if [ ! -f ${toolConfig} ]; then DP_error_log "Config file not found: ${toolConfig}" exit 1 fi local provider="" local access_key_id="" local secret_access_key="" local region="" local endpoint="" local bucket="" IFS=$'\n' for line in $(cat ${toolConfig}); do line=$(eval echo $line) if [[ $line == "access_key_id"* ]]; then access_key_id=$(getToolConfigValue "$line") elif [[ $line == "secret_access_key"* ]]; then secret_access_key=$(getToolConfigValue "$line") elif [[ $line == "region"* ]]; then region=$(getToolConfigValue "$line") elif [[ $line == "endpoint"* ]]; then endpoint=$(getToolConfigValue "$line") elif [[ $line == "root"* ]]; then bucket=$(getToolConfigValue "$line") elif [[ $line == "provider"* ]]; then provider=$(getToolConfigValue "$line") fi done if [[ ! $endpoint =~ ^https?:// ]]; then endpoint="https://${endpoint}" fi if [[ "$provider" == "Alibaba" ]]; then regex='https?:\/\/oss-(.*?)\.aliyuncs\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "TencentCOS" ]]; then regex='https?:\/\/cos\.(.*?)\.myqcloud\.com' if [[ "$endpoint" =~ $regex ]]; then region="${BASH_REMATCH[1]}" DP_log "Extract region from $endpoint-> $region" else DP_log "Failed to extract region from endpoint: $endpoint" fi elif [[ "$provider" == "Minio" ]]; then export S3_FORCE_PATH_STYLE="true" else echo "Unsupported provider: $provider" fi backup_path=$(dirname "$DP_BACKUP_BASE_PATH") export S3_ACCESS_KEY="${access_key_id}" export S3_SECRET_KEY="${secret_access_key}" export S3_REGION="${region}" export S3_ENDPOINT="${endpoint}" export S3_BUCKET="${bucket}" export S3_PREFIX="${backup_path#/}/$PBM_BACKUP_DIR_NAME" DP_log "storage config have been extracted." } # config backup agent generate_endpoints() { local fqdns=$1 local port=$2 if [ -z "$fqdns" ]; then echo "ERROR: No FQDNs provided for endpoints." 
>&2 exit 1 fi IFS=',' read -ra fqdn_array <<< "$fqdns" local endpoints=() for fqdn in "${fqdn_array[@]}"; do trimmed_fqdn=$(echo "$fqdn" | xargs) if [[ -n "$trimmed_fqdn" ]]; then endpoints+=("${trimmed_fqdn}:${port}") fi done IFS=','; echo "${endpoints[*]}" } function export_pbm_env_vars() { export PBM_AGENT_MONGODB_USERNAME="$MONGODB_USER" export PBM_AGENT_MONGODB_PASSWORD="$MONGODB_PASSWORD" cfg_server_endpoints="$(generate_endpoints "$CFG_SERVER_POD_FQDN_LIST" "$CFG_SERVER_INTERNAL_PORT")" export PBM_MONGODB_URI="mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$cfg_server_endpoints/?authSource=admin&replSetName=$CFG_SERVER_REPLICA_SET_NAME" } function export_pbm_env_vars_for_rs() { export PBM_AGENT_MONGODB_USERNAME="$MONGODB_USER" export PBM_AGENT_MONGODB_PASSWORD="$MONGODB_PASSWORD" mongodb_endpoints="$(generate_endpoints "$MONGODB_POD_FQDN_LIST" "$KB_SERVICE_PORT")" export PBM_MONGODB_URI="mongodb://$PBM_AGENT_MONGODB_USERNAME:$PBM_AGENT_MONGODB_PASSWORD@$mongodb_endpoints/?authSource=admin&replSetName=$MONGODB_REPLICA_SET_NAME" } function sync_pbm_storage_config() { echo "INFO: Checking if PBM storage config exists" pbm_config_exists=true check_config=$(pbm config --mongodb-uri "$PBM_MONGODB_URI" -o json) || { pbm_config_exists=false echo "INFO: PBM storage config does not exist." } if [ "$pbm_config_exists" = "true" ]; then # check_config=$(pbm config --mongodb-uri "$PBM_MONGODB_URI" -o json) current_endpoint=$(echo "$check_config" | jq -r '.storage.s3.endpointUrl') current_region=$(echo "$check_config" | jq -r '.storage.s3.region') current_bucket=$(echo "$check_config" | jq -r '.storage.s3.bucket') current_prefix=$(echo "$check_config" | jq -r '.storage.s3.prefix') echo "INFO: Current PBM storage endpoint: $current_endpoint" echo "INFO: Current PBM storage region: $current_region" echo "INFO: Current PBM storage bucket: $current_bucket" echo "INFO: Current PBM storage prefix: $current_prefix" if [ "$current_prefix" = "$S3_PREFIX" ] && [ "$current_region" = "$S3_REGION" ] && [ "$current_bucket" = "$S3_BUCKET" ] && [ "$current_endpoint" = "$S3_ENDPOINT" ]; then echo "INFO: PBM storage config already exists." else pbm_config_exists=false fi fi if [ "$pbm_config_exists" = "false" ]; then cat < /dev/null storage: type: s3 s3: region: ${S3_REGION} bucket: ${S3_BUCKET} prefix: ${S3_PREFIX} endpointUrl: ${S3_ENDPOINT} forcePathStyle: ${S3_FORCE_PATH_STYLE:-false} credentials: access-key-id: ${S3_ACCESS_KEY} secret-access-key: ${S3_SECRET_KEY} restore: numDownloadWorkers: ${PBM_RESTORE_DOWNLOAD_WORKERS:-4} backup: timeouts: startingStatus: 60 EOF sleep 5 echo "INFO: PBM storage configuration completed." fi } function print_pbm_logs_by_event() { local pbm_event=$1 # echo "INFO: Printing PBM logs by event: $pbm_event" # shellcheck disable=SC2328 local pbm_logs=$(pbm logs -e $pbm_event --tail 200 --mongodb-uri "$PBM_MONGODB_URI" > /dev/null) local purged_logs=$(echo "$pbm_logs" | awk -v start="$PBM_LOGS_START_TIME" '$1 >= start') if [ -z "$purged_logs" ]; then return fi echo "$purged_logs" # echo "INFO: PBM logs by event: $pbm_event printed." } function print_pbm_tail_logs() { echo "INFO: Printing PBM tail logs" pbm logs --tail 20 --mongodb-uri "$PBM_MONGODB_URI" } function handle_backup_exit() { exit_code=$? set +e if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function handle_restore_exit() { exit_code=$? 
set +e if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" exit 1 fi } function handle_pitr_exit() { exit_code=$? set +e if [[ "$PBM_DISABLE_PITR_WHEN_EXIT" == "true" ]]; then disable_pitr fi if [ $exit_code -ne 0 ]; then print_pbm_tail_logs echo "failed with exit code $exit_code" touch "${DP_BACKUP_INFO_FILE}.exit" exit 1 fi } function wait_for_other_operations() { status_result=$(pbm status --mongodb-uri "$PBM_MONGODB_URI" -o json) || { echo "INFO: PBM is not configured." return } local except_type=$1 local running_status=$(echo "$status_result" | jq -r '.running') local retry_count=0 local max_retries=60 while [ -n "$running_status" ] && [ "$running_status" != "{}" ] && [ $retry_count -lt $max_retries ]; do retry_count=$((retry_count+1)) local running_type=$(echo "$running_status" | jq -r '.type') if [ -n "$running_type" ] && [ "$running_type" = "$except_type" ]; then break fi echo "INFO: Other operation $running_type is running, waiting... ($retry_count/$max_retries)" sleep 5 running_status=$(pbm status --mongodb-uri "$PBM_MONGODB_URI" -o json | jq -r '.running') done if [ $retry_count -ge $max_retries ]; then echo "ERROR: Other operations are still running after $max_retries retries" exit 1 fi } function export_logs_start_time_env() { local logs_start_time=$(date +"%Y-%m-%dT%H:%M:%SZ") export PBM_LOGS_START_TIME="${logs_start_time}" } function sync_pbm_config_from_storage() { echo "INFO: Syncing PBM config from storage..." wait_for_other_operations pbm config --force-resync --mongodb-uri "$PBM_MONGODB_URI" # print_pbm_logs_by_event "resync" # resync wait flag might don't work wait_for_other_operations echo "INFO: PBM config synced from storage." } function wait_for_backup_completion() { describe_result="" local retry_interval=5 local attempt=1 local max_retries=12 set +e while true; do describe_result=$(pbm describe-backup --mongodb-uri "$PBM_MONGODB_URI" "$backup_name" -o json 2>&1) if [ $? -eq 0 ] && [ -n "$describe_result" ]; then backup_status=$(echo "$describe_result" | jq -r '.status') if [ "$backup_status" = "starting" ] || [ "$backup_status" = "running" ]; then echo "INFO: Backup status is $backup_status, retrying in ${retry_interval}s..." elif [ "$backup_status" = "" ]; then echo "INFO: Backup status is $backup_status, retrying in ${retry_interval}s..." attempt=$((attempt+1)) elif [ "$backup_status" = "done" ]; then echo "INFO: Backup status is done." break else echo "ERROR: Backup failed with status: $backup_status" exit 1 fi elif echo "$describe_result" | grep -q "not found"; then echo "INFO: Backup metadata not found, retrying in ${retry_interval}s..." attempt=$((attempt+1)) else echo "ERROR: Unexpected: $describe_result" exit 1 fi sleep $retry_interval if [ $attempt -gt $max_retries ]; then echo "ERROR: Failed to get backup status after $max_retries attempts" exit 1 fi done set -e backup_status=$(echo "$describe_result" | jq -r '.status') if [ "$backup_status" != "done" ]; then echo "ERROR: Backup did not complete successfully, final status: $backup_status" exit 1 fi } function create_restore_signal() { phase=$1 kubectl apply -f - <&1) kubectl_get_exit_code=$? 
set -e # Wait for the restore signal ConfigMap to be created or updated if [[ "$kubectl_get_exit_code" -ne 0 ]]; then if [[ "$kubectl_get_result" == *"not found"* ]]; then create_restore_signal "start" fi else annotation_value=$(echo "$kubectl_get_result" | jq -r '.metadata.labels["apps.kubeblocks.io/restore-mongodb-shard"] // empty') if [[ "$annotation_value" == "start" ]]; then break elif [[ "$annotation_value" == "end" ]]; then echo "INFO: Restore completed, exiting." exit 0 else echo "INFO: Restore start signal is $annotation_value, updating..." create_restore_signal "start" fi fi sleep 1 done sleep 5 echo "INFO: Prepare restore start signal completed." } function process_restore_end_signal() { echo "INFO: Waiting for prepare restore end signal..." sleep 5 dp_cm_name="$CLUSTER_NAME-restore-signal" dp_cm_namespace="$CLUSTER_NAMESPACE" while true; do set +e kubectl_get_result=$(kubectl get configmap $dp_cm_name -n $dp_cm_namespace -o json 2>&1) kubectl_get_exit_code=$? set -e # Wait for the restore signal ConfigMap to be created or updated if [[ "$kubectl_get_exit_code" -ne 0 ]]; then if [[ "$kubectl_get_result" == *"not found"* ]]; then create_restore_signal "end" fi else annotation_value=$(echo "$kubectl_get_result" | jq -r '.metadata.labels["apps.kubeblocks.io/restore-mongodb-shard"] // empty') if [[ "$annotation_value" == "end" ]]; then break else echo "INFO: Restore end signal is $annotation_value, updating..." create_restore_signal "end" fi fi sleep 1 done echo "INFO: Prepare restore end signal completed." } function get_describe_backup_info() { describe_result="" local max_retries=60 local retry_interval=5 local attempt=1 set +e while [ $attempt -le $max_retries ]; do describe_result=$(pbm describe-backup --mongodb-uri "$PBM_MONGODB_URI" "$backup_name" -o json 2>&1) if [ $? -eq 0 ] && [ -n "$describe_result" ]; then break elif echo "$describe_result" | grep -q "not found"; then echo "INFO: Attempt $attempt: backup $backup_name not found, retrying in ${retry_interval}s..." if [ $((attempt % 30)) -eq 29 ]; then echo "INFO: Sync PBM config from storage again." sync_pbm_config_from_storage fi sleep $retry_interval ((attempt++)) continue else echo "ERROR: Failed to get backup metadata: $describe_result" exit 1 fi done set -e if [ -z "$describe_result" ] || echo "$describe_result" | grep -q "not found"; then echo "ERROR: Failed to get backup metadata after $max_retries attempts" exit 1 fi } function wait_for_restoring() { local cnf_file="${MOUNT_DIR}/tmp/pbm_restore.cnf" cat < ${MOUNT_DIR}/tmp/pbm_restore.cnf storage: type: s3 s3: region: ${S3_REGION} bucket: ${S3_BUCKET} prefix: ${S3_PREFIX} endpointUrl: ${S3_ENDPOINT} forcePathStyle: ${S3_FORCE_PATH_STYLE:-false} credentials: access-key-id: ${S3_ACCESS_KEY} secret-access-key: ${S3_SECRET_KEY} EOF local attempt=0 local max_retries=12 local try_interval=5 while true; do restore_status=$(pbm describe-restore "$restore_name" -c $cnf_file -o json | jq -r '.status') echo "INFO: Restore $restore_name status: $restore_status, retrying in ${try_interval}s..." 
if [ "$restore_status" = "done" ]; then rm $cnf_file break elif [ "$restore_status" = "starting" ] || [ "$restore_status" = "running" ]; then sleep $try_interval elif [ "$restore_status" = "" ]; then sleep $try_interval attempt=$((attempt+1)) if [ $attempt -gt $max_retries ]; then echo "ERROR: Restore $restore_name status is still empty after $max_retries retries" rm $cnf_file exit 1 fi else rm $cnf_file exit 1 fi done } #!/bin/bash set -e set -o pipefail export PATH="$PATH:$DP_DATASAFED_BIN_PATH:$MOUNT_DIR/tmp/bin" export DATASAFED_BACKEND_BASE_PATH="$DP_BACKUP_BASE_PATH" export_pbm_env_vars_for_rs set_backup_config_env export_logs_start_time_env trap handle_restore_exit EXIT wait_for_other_operations sync_pbm_storage_config sync_pbm_config_from_storage extras=$(cat /dp_downward/status_extras) backup_name=$(echo "$extras" | jq -r '.[0].backup_name') backup_type=$(echo "$extras" | jq -r '.[0].backup_type') if [ -z "$backup_type" ] || [ -z "$backup_name" ]; then echo "ERROR: Backup type or backup name is empty, skip restore." exit 1 fi get_describe_backup_info rs_name=$(echo "$describe_result" | jq -r '.replsets[0].name') mappings="$MONGODB_REPLICA_SET_NAME=$rs_name" echo "INFO: Replica set mappings: $mappings" process_restore_start_signal wait_for_other_operations restore_name=$(pbm restore $backup_name --mongodb-uri "$PBM_MONGODB_URI" --replset-remapping "$mappings" -o json | jq -r '.name') wait_for_restoring process_restore_end_signal State: Terminated Reason: Error Exit Code: 1 Started: Thu, 12 Feb 2026 16:29:18 +0800 Finished: Thu, 12 Feb 2026 16:34:27 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment Variables from: mongodb-trwkwn-backup-mongodb-env ConfigMap Optional: false Environment: DP_BACKUP_NAME: backup-ns-gtubu-mongodb-trwkwn-20260212162606 DP_TARGET_RELATIVE_PATH: DP_BACKUP_ROOT_PATH: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb DP_BACKUP_BASE_PATH: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 DP_BACKUP_STOP_TIME: 2026-02-12T08:26:18Z DATA_DIR: /data/mongodb/db MOUNT_DIR: /data/mongodb PBM_BACKUP_DIR_NAME: pbm-backups PBM_BACKUP_TYPE: physical PBM_COMPRESSION: s2 PBM_RESTORE_DOWNLOAD_WORKERS: 4 PBM_IMAGE_TAG: 2.12.0 PSM_IMAGE_TAG: 8.0.17 MONGODB_USER: Optional: false MONGODB_PASSWORD: Optional: false POD_NAME: restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc (v1:metadata.name) POD_NAMESPACE: ns-gtubu (v1:metadata.namespace) POD_UID: (v1:metadata.uid) POD_IP: (v1:status.podIP) PATH: /tools/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KB_SERVICE_CHARACTER_TYPE: mongodb SERVICE_PORT: $(KB_SERVICE_PORT) MONGODB_ROOT_USER: $(MONGODB_USER) MONGODB_ROOT_PASSWORD: $(MONGODB_PASSWORD) MONGODB_GRANT_ANYACTION_PRIVILEGE: true PBM_AGENT_MONGODB_USERNAME: $(MONGODB_USER) PBM_AGENT_MONGODB_PASSWORD: $(MONGODB_PASSWORD) PBM_MONGODB_REPLICA_SET: $(KB_CLUSTER_COMP_NAME) PBM_AGENT_SIDECAR: true PBM_AGENT_SIDECAR_SLEEP: 5 PBM_MONGODB_URI: mongodb://$(PBM_AGENT_MONGODB_USERNAME):$(PBM_AGENT_MONGODB_PASSWORD)@localhost:$(KB_SERVICE_PORT)/?authSource=admin KB_POD_FQDN: $(POD_NAME).$(CLUSTER_COMPONENT_NAME)-headless.$(CLUSTER_NAMESPACE).svc DP_DB_USER: Optional: false DP_DB_PASSWORD: Optional: false DP_DB_PORT: 27017 DP_DB_HOST: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /data/mongodb from data (rw) 
/dp_downward/ from downward-volume (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh5h7 (ro) restore-manager: Container ID: containerd://07fc6a445c980e11402cb53e1b324fdad0b980c5580e9789b87c45353bbd93a8 Image: docker.io/apecloud/kubeblocks-tools:1.0.2 Image ID: docker.io/apecloud/kubeblocks-tools@sha256:52a60316d6ece80cb1440179a7902bad1129a8535c025722486f4f1986d095ea Port: Host Port: Command: sh -c Args: set -o errexit set -o nounset sleep_seconds="1" signal_file="/dp_downward/stop_restore_manager" if [ "$sleep_seconds" -le 0 ]; then sleep_seconds=2 fi while true; do if [ -f "$signal_file" ] && [ "$(cat "$signal_file")" = "true" ]; then break fi echo "waiting for other restore workloads, sleep ${sleep_seconds}s" sleep "$sleep_seconds" done echo "restore manager stopped" State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 12 Feb 2026 16:29:18 +0800 Finished: Thu, 12 Feb 2026 16:34:29 +0800 Ready: False Restart Count: 0 Limits: cpu: 0 memory: 0 Requests: cpu: 0 memory: 0 Environment: DP_DATASAFED_BIN_PATH: /bin/datasafed Mounts: /bin/datasafed from dp-datasafed-bin (rw) /dp_downward from downward-volume-sidecard (rw) /etc/datasafed from dp-datasafed-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bh5h7 (ro) Conditions: Type Status PodReadyToStartContainers False Initialized True Ready False ContainersReady False PodScheduled True Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-mongodb-trwkwn-backup-mongodb-0 ReadOnly: false downward-volume: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/backup-extras'] -> status_extras downward-volume-sidecard: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations['dataprotection.kubeblocks.io/stop-restore-manager'] -> stop_restore_manager dp-datasafed-config: Type: Secret (a volume populated by a Secret) SecretName: tool-config-backuprepo-kbcli-test-4fs2t9 Optional: false dp-datasafed-bin: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: kube-api-access-bh5h7: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=aks-cicdamdpool-14916756-vmss000002 Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned ns-gtubu/restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc to aks-cicdamdpool-14916756-vmss000002 Normal Pulled 10m kubelet Container image "docker.io/apecloud/datasafed:0.2.3" already present on machine Normal Created 10m kubelet Created container: dp-copy-datasafed Normal Started 10m kubelet Started container dp-copy-datasafed Normal Pulled 10m kubelet Container image "docker.io/apecloud/percona-backup-mongodb:2.12.0" already present on machine Normal Created 10m kubelet Created container: restore Normal Started 10m kubelet Started container restore Normal Pulled 10m kubelet Container image "docker.io/apecloud/kubeblocks-tools:1.0.2" already present on machine Normal 
Created 10m kubelet Created container: restore-manager Normal Started 10m kubelet Started container restore-manager Warning FailedToRetrieveImagePullSecret 5m24s (x10 over 10m) kubelet Unable to retrieve some image pull secrets (kbcli-test-registry-key); attempting to pull the image may not succeed. ------------------------------------------------------------------------------------------------------------------ --------------------------------------pod restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc--------------------------------------  `kubectl logs restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-flgw4 --namespace ns-gtubu --tail 500`(B  2026-02-12 08:34:41 INFO: storage config have been extracted. INFO: PBM is not configured. INFO: Checking if PBM storage config exists INFO: PBM storage config does not exist. Error: connect to mongodb: create mongo connection: ping: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: RSSecondary, Average RTT: 10459241 }, ] } INFO: Printing PBM tail logs Error: connect to mongodb: create mongo connection: ping: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: RSSecondary, Average RTT: 692824 }, ] } failed with exit code 1 ------------------------------------------------------------------------------------------------------------------  `kubectl logs restore-post-ready-b43756ab-backup-ns-gtubu-mongodb-trwkw-jtvlc --namespace ns-gtubu --tail 500`(B  2026-02-12 08:29:18 INFO: storage config have been extracted. Error: get status of cluster: get cluster status: get config: get: mongo: no documents in result INFO: PBM is not configured. 
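Note: the two restore pods fail for different reasons. In the first log shown (flgw4, the later attempt, started 16:34), PBM never gets a usable connection: members 0 and 1 of the target replica set refuse connections and only member 2 is reachable, as a secondary, so every pbm call fails before any restore work starts. In the second log (jtvlc, the first attempt), the connection succeeds but the freshly created cluster has no PBM configuration yet, which is what the "mongo: no documents in result" messages mean; the script then writes the storage config and resyncs it, as sync_pbm_storage_config above describes. The same config check can be repeated by hand with the command the script itself uses; the sketch below is illustrative only and assumes pbm is on PATH and PBM_MONGODB_URI points at the target replica set, neither of which is guaranteed outside the restore container.

  # Sketch only: repeat the storage-config check from sync_pbm_storage_config
  # (assumes pbm is installed and PBM_MONGODB_URI is exported, as in the restore container)
  pbm config --mongodb-uri "$PBM_MONGODB_URI" -o json \
    | jq '{bucket: .storage.s3.bucket, prefix: .storage.s3.prefix, region: .storage.s3.region, endpoint: .storage.s3.endpointUrl}'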
INFO: Checking if PBM storage config exists Error: get: mongo: no documents in result INFO: PBM storage config does not exist. INFO: PBM storage configuration completed. INFO: Syncing PBM config from storage... Storage resync started INFO: Other operation resync is running, waiting... (1/60) INFO: Other operation resync is running, waiting... (2/60) INFO: PBM config synced from storage. INFO: Replica set mappings: mongodb-trwkwn-backup-mongodb=mongodb-trwkwn-mongodb INFO: Waiting for prepare restore start signal... configmap/mongodb-trwkwn-backup-restore-signal created INFO: Prepare restore start signal completed. INFO: Restore 2026-02-12T08:29:45.942732824Z status: , retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: , retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: , retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: starting, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: starting, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: starting, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... 
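Note: the long run of "status: running" lines above is wait_for_restoring polling pbm describe-restore every 5 seconds against the temporary storage-config file it wrote to ${MOUNT_DIR}/tmp/pbm_restore.cnf, so the check does not rely on a database connection; during a PBM physical restore the mongod processes are shut down while data files are replaced, which is consistent with the connection-refused errors seen from the restore pods. The loop only tolerates the statuses done, starting, running and empty, so the partlyDone status that follows hits the catch-all branch, removes the config file and exits 1, after which the EXIT trap's pbm logs call fails with the connection-refused error that closes this log. The poll can be repeated by hand; the sketch below uses the restore name reported in this log and assumes MOUNT_DIR=/data/mongodb (from the pod env above) and that the config file still exists or has been recreated with the same storage settings.

  # Sketch only: repeat the poll from wait_for_restoring for this run's restore name
  pbm describe-restore "2026-02-12T08:29:45.942732824Z" \
    -c /data/mongodb/tmp/pbm_restore.cnf -o json | jq -r '.status'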
INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: running, retrying in 5s... INFO: Restore 2026-02-12T08:29:45.942732824Z status: partlyDone, retrying in 5s... INFO: Printing PBM tail logs Error: connect to mongodb: create mongo connection: ping: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc.cluster.local:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.6.204:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: Unknown, Last error: dial tcp 10.244.4.36:27017: connect: connection refused }, { Addr: mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017, Type: RSSecondary, Average RTT: 673767 }, ] } failed with exit code 1 ------------------------------------------------------------------------------------------------------------------  `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212162606 --namespace ns-gtubu `(B  Name: backup-ns-gtubu-mongodb-trwkwn-20260212162606 Cluster: mongodb-trwkwn Namespace: ns-gtubu Spec: Method: pbm-physical Policy Name: mongodb-trwkwn-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212162606-8e203 TargetPodName: mongodb-trwkwn-mongodb-0 Phase: Completed Start Time: Feb 12,2026 16:26 UTC+0800 Completion Time: Feb 12,2026 16:26 UTC+0800 Extras: =================== 1 =================== backupName: 2026-02-12T08:26:14Z backupType: physical lastWriteTime: 2026-02-12T08:26:16Z Status: Phase: Completed Total Size: 588450 ActionSet Name: mongodb-rs-pbm-physical Repository: backuprepo-kbcli-test Duration: 21s Start Time: Feb 12,2026 16:26 UTC+0800 Completion Time: Feb 12,2026 16:26 UTC+0800 Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212162606 Time Range Start: Feb 12,2026 16:26 UTC+0800 Time Range End: Feb 12,2026 16:26 UTC+0800 Warning Events: cluster connect  `kubectl get secrets -l 
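The retry loop above is printed by the restore job itself; the same progress can also be watched from outside the pod. A sketch, assuming the KubeBlocks dataprotection API exposes Restore objects with a status.phase field and that the restored cluster's Restore carries the usual app.kubernetes.io/instance label (both assumptions, not shown in this log):

  for attempt in $(seq 1 60); do
    # phase is expected to move through Running to Completed/Failed (assumption)
    phase=$(kubectl get restores.dataprotection.kubeblocks.io -n ns-gtubu \
      -l app.kubernetes.io/instance=mongodb-trwkwn-backup \
      -o jsonpath='{.items[*].status.phase}' 2>/dev/null)
    echo "attempt $attempt: restore phase(s): ${phase:-<none>}"
    case "$phase" in *Completed*|*Failed*) break ;; esac
    sleep 5
  done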
cluster connect
 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
 `echo " echo \"rs.status()\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash `
check connect cluster... (repeated until the connect check timed out)
[Error] connect cluster timeout
delete cluster mongodb-trwkwn-backup
 `kbcli cluster delete mongodb-trwkwn-backup --auto-approve --namespace ns-gtubu `
pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Running 0 18m mongodb-trwkwn-backup-mongodb-1 4/4 Running 0 17m mongodb-trwkwn-backup-mongodb-2 4/4 Running 1 (13m ago) 17m
Cluster mongodb-trwkwn-backup deleted
pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Terminating 0 18m mongodb-trwkwn-backup-mongodb-1 4/4 Terminating 0 17m mongodb-trwkwn-backup-mongodb-2 4/4 Terminating 1 (14m ago) 17m
delete cluster pod done
checking pvc non-exist (repeated until the check timed out)
[Error] check cluster resource non-exist TIMED-OUT: pvc
data-mongodb-trwkwn-backup-mongodb-0
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pvc data-mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu `
persistentvolumeclaim/data-mongodb-trwkwn-backup-mongodb-0 patched
delete cluster done
check resource cm non exists
check resource cm non exists
cluster delete backup
 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-gtubu-mongodb-trwkwn-20260212162606 --namespace ns-gtubu `
backup.dataprotection.kubeblocks.io/backup-ns-gtubu-mongodb-trwkwn-20260212162606 patched
 `kbcli cluster delete-backup mongodb-trwkwn --name backup-ns-gtubu-mongodb-trwkwn-20260212162606 --force --auto-approve --namespace ns-gtubu `
Backup backup-ns-gtubu-mongodb-trwkwn-20260212162606 deleted
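The account secret read in the next step is base64-encoded in the API; for a manual check the same fields can be decoded in one go (secret and namespace names are the ones from this run):

  kubectl get secret mongodb-trwkwn-mongodb-account-root -n ns-gtubu \
    -o jsonpath='{.data.username}' | base64 -d; echo
  kubectl get secret mongodb-trwkwn-mongodb-account-root -n ns-gtubu \
    -o jsonpath='{.data.password}' | base64 -d; echo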
 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn`
 `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets mongodb-trwkwn-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
 `db.msg.drop();db.createCollection('msg');db.msg.insertOne({msg: 'kbcli-test-data-trwkwn0',time: new Date()}); ... (insertOne repeated for kbcli-test-data-trwkwn1 through kbcli-test-data-trwkwn15)`
Current Mongosh Log ID: 698d93985d93ccef538b79a1
Connecting to: mongodb://@mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10
Using MongoDB: 8.0.17-6
Using Mongosh: 2.5.10
mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2026-02-12T08:08:11.837+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2026-02-12T08:08:18.230+00:00: You are running this process as the root user, which is not recommended
2026-02-12T08:08:18.231+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile
2026-02-12T08:08:18.231+00:00: We suggest setting the contents of sysfsFile to 0.
2026-02-12T08:08:18.231+00:00: vm.max_map_count is too low
------
mongodb-trwkwn-mongodb [direct: primary] admin> { acknowledged: true, insertedId: ObjectId('698d93a25d93ccef538b79b1') }
mongodb-trwkwn-mongodb [direct: primary] admin>
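For reference, the seeding statement above could be generated with a small loop rather than written out by hand. A sketch; it has to run from a pod inside the cluster because the service address is cluster-internal, and the password variable is an assumption:

  {
    echo "db.msg.drop();db.createCollection('msg');"
    for i in $(seq 0 15); do
      echo "db.msg.insertOne({msg: 'kbcli-test-data-trwkwn$i', time: new Date()});"
    done
  } | mongosh --host mongodb-trwkwn-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 \
      -u root -p "$DB_PASSWORD" --authenticationDatabase admin admin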
cluster dump backup
 `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"`
 `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"`
 `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.accessKeyId}"`
 `kubectl get secrets kb-backuprepo-pn64t -n kb-wrwyg -o jsonpath="{.data.secretAccessKey}"`
KUBEBLOCKS NAMESPACE:kb-wrwyg
get kubeblocks namespace done
 `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-user}"`
 `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-wrwyg -o jsonpath="{.items[0].data.root-password}"`
minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000
list minio bucket kbcli-test
 `echo 'mc alias set minioserver http://kbcli-test-minio.kb-wrwyg.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-546f6447c7-cvf8k --namespace kb-wrwyg -- bash`
list minio bucket done
default backuprepo:backuprepo-kbcli-test exists
 `kbcli cluster backup mongodb-trwkwn --method dump --namespace ns-gtubu `
Backup backup-ns-gtubu-mongodb-trwkwn-20260212164734 created successfully, you can view the progress:
  kbcli cluster list-backups --names=backup-ns-gtubu-mongodb-trwkwn-20260212164734 -n ns-gtubu
check backup status
 `kbcli cluster list-backups mongodb-trwkwn --namespace ns-gtubu `
NAME                                            NAMESPACE   SOURCE-CLUSTER   METHOD   STATUS    TOTAL-SIZE   DURATION   DELETION-POLICY   CREATE-TIME                  COMPLETION-TIME   EXPIRATION
backup-ns-gtubu-mongodb-trwkwn-20260212164734   ns-gtubu    mongodb-trwkwn   dump     Running                           Delete            Feb 12,2026 16:47 UTC+0800
backup_status:mongodb-trwkwn-dump-Running
backup_status:mongodb-trwkwn-dump-Running
check backup status done
backup_status:backup-ns-gtubu-mongodb-trwkwn-20260212164734 ns-gtubu mongodb-trwkwn dump Completed 40430 11s Delete Feb 12,2026 16:47 UTC+0800 Feb 12,2026 16:47 UTC+0800
cluster restore backup
 `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212164734 --namespace ns-gtubu `
Name: backup-ns-gtubu-mongodb-trwkwn-20260212164734
Cluster: mongodb-trwkwn
Namespace: ns-gtubu
Spec:
  Method: dump
  Policy Name: mongodb-trwkwn-mongodb-backup-policy
Actions:
  dp-backup-0:
    ActionType: Job
    WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212164734-bcd9f
    TargetPodName: mongodb-trwkwn-mongodb-0
    Phase: Completed
    Start Time: Feb 12,2026 16:47 UTC+0800
    Completion Time: Feb 12,2026 16:47 UTC+0800
Status:
  Phase: Completed
  Total Size: 40430
  ActionSet Name: mongodb-dump-br
  Repository: backuprepo-kbcli-test
  Duration: 11s
  Start Time: Feb 12,2026 16:47 UTC+0800
  Completion Time: Feb 12,2026 16:47 UTC+0800
  Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212164734
  Time Range Start: Feb 12,2026 16:47 UTC+0800
  Time Range End: Feb 12,2026 16:47 UTC+0800
Warning Events:
 `kbcli cluster restore mongodb-trwkwn-backup --backup backup-ns-gtubu-mongodb-trwkwn-20260212164734 --namespace ns-gtubu `
Cluster mongodb-trwkwn-backup created
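Instead of polling kbcli cluster list in a loop as below, the same wait can be expressed with kubectl alone. A sketch, assuming the Cluster resource exposes status.phase as in the apps.kubeblocks.io/v1 API used earlier in this run:

  kubectl wait cluster.apps.kubeblocks.io/mongodb-trwkwn-backup -n ns-gtubu \
    --for=jsonpath='{.status.phase}'=Running --timeout=15m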
check cluster status
 `kbcli cluster list mongodb-trwkwn-backup --show-labels --namespace ns-gtubu `
NAME                    NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mongodb-trwkwn-backup   ns-gtubu    mongodb              WipeOut              Creating   Feb 12,2026 16:47 UTC+0800   clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Creating (repeated while the cluster was Creating)
check cluster status done
cluster_status:Running
check pod status
 `kbcli cluster list-instances mongodb-trwkwn-backup --namespace ns-gtubu `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-trwkwn-backup-mongodb-0 ns-gtubu mongodb-trwkwn-backup mongodb Running primary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000000/10.224.0.9 Feb 12,2026 16:47 UTC+0800
mongodb-trwkwn-backup-mongodb-1 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000002/10.224.0.8 Feb 12,2026 16:48 UTC+0800
mongodb-trwkwn-backup-mongodb-2 ns-gtubu mongodb-trwkwn-backup mongodb Running secondary 0 200m / 200m 644245094400m / 644245094400m data:7Gi aks-cicdamdpool-14916756-vmss000005/10.224.0.10 Feb 12,2026 16:48 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-trwkwn-backup-mongodb-0;secondary: mongodb-trwkwn-backup-mongodb-1 mongodb-trwkwn-backup-mongodb-2
 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`
 `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:
check cluster connect
 `echo " echo \"\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash`
check cluster connect done
check backup restore post ready
post_ready_pod_exists: (repeated until the check timed out)
check backup restore post ready exists timeout
check backup restore post ready done
 `kbcli cluster describe-backup --names backup-ns-gtubu-mongodb-trwkwn-20260212164734 --namespace ns-gtubu `
Name: backup-ns-gtubu-mongodb-trwkwn-20260212164734 Cluster: mongodb-trwkwn
Namespace: ns-gtubu Spec: Method: dump Policy Name: mongodb-trwkwn-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-gtubu-mongodb-trwkwn-20260212164734-bcd9f TargetPodName: mongodb-trwkwn-mongodb-0 Phase: Completed Start Time: Feb 12,2026 16:47 UTC+0800 Completion Time: Feb 12,2026 16:47 UTC+0800 Status: Phase: Completed Total Size: 40430 ActionSet Name: mongodb-dump-br Repository: backuprepo-kbcli-test Duration: 11s Start Time: Feb 12,2026 16:47 UTC+0800 Completion Time: Feb 12,2026 16:47 UTC+0800 Path: /ns-gtubu/mongodb-trwkwn-5f717ad4-5f30-4aef-b429-b1069b5ec660/mongodb/backup-ns-gtubu-mongodb-trwkwn-20260212164734 Time Range Start: Feb 12,2026 16:47 UTC+0800 Time Range End: Feb 12,2026 16:47 UTC+0800 Warning Events:  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `db.msg.find();`(B  Current Mongosh Log ID: 698d94b03773e4adeb8b79a1 Connecting to: mongodb://@mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:48:15.531+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:48:18.601+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:48:18.601+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:48:18.601+00:00: We suggest setting the contents of sysfsFile to 0. 
2026-02-12T08:48:18.601+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-backup-mongodb [direct: primary] admin> [ { _id: ObjectId('698d93a05d93ccef538b79a2'), msg: 'kbcli-test-data-trwkwn0', time: ISODate('2026-02-12T08:47:28.638Z') }, { _id: ObjectId('698d93a15d93ccef538b79a3'), msg: 'kbcli-test-data-trwkwn1', time: ISODate('2026-02-12T08:47:29.031Z') }, { _id: ObjectId('698d93a15d93ccef538b79a4'), msg: 'kbcli-test-data-trwkwn2', time: ISODate('2026-02-12T08:47:29.138Z') }, { _id: ObjectId('698d93a15d93ccef538b79a5'), msg: 'kbcli-test-data-trwkwn3', time: ISODate('2026-02-12T08:47:29.331Z') }, { _id: ObjectId('698d93a15d93ccef538b79a6'), msg: 'kbcli-test-data-trwkwn4', time: ISODate('2026-02-12T08:47:29.432Z') }, { _id: ObjectId('698d93a15d93ccef538b79a7'), msg: 'kbcli-test-data-trwkwn5', time: ISODate('2026-02-12T08:47:29.537Z') }, { _id: ObjectId('698d93a15d93ccef538b79a8'), msg: 'kbcli-test-data-trwkwn6', time: ISODate('2026-02-12T08:47:29.640Z') }, { _id: ObjectId('698d93a15d93ccef538b79a9'), msg: 'kbcli-test-data-trwkwn7', time: ISODate('2026-02-12T08:47:29.733Z') }, { _id: ObjectId('698d93a15d93ccef538b79aa'), msg: 'kbcli-test-data-trwkwn8', time: ISODate('2026-02-12T08:47:29.931Z') }, { _id: ObjectId('698d93a25d93ccef538b79ab'), msg: 'kbcli-test-data-trwkwn9', time: ISODate('2026-02-12T08:47:30.132Z') }, { _id: ObjectId('698d93a25d93ccef538b79ac'), msg: 'kbcli-test-data-trwkwn10', time: ISODate('2026-02-12T08:47:30.333Z') }, { _id: ObjectId('698d93a25d93ccef538b79ad'), msg: 'kbcli-test-data-trwkwn11', time: ISODate('2026-02-12T08:47:30.439Z') }, { _id: ObjectId('698d93a25d93ccef538b79ae'), msg: 'kbcli-test-data-trwkwn12', time: ISODate('2026-02-12T08:47:30.632Z') }, { _id: ObjectId('698d93a25d93ccef538b79af'), msg: 'kbcli-test-data-trwkwn13', time: ISODate('2026-02-12T08:47:30.639Z') }, { _id: ObjectId('698d93a25d93ccef538b79b0'), msg: 'kbcli-test-data-trwkwn14', time: ISODate('2026-02-12T08:47:30.832Z') }, { _id: ObjectId('698d93a25d93ccef538b79b1'), msg: 'kbcli-test-data-trwkwn15', time: ISODate('2026-02-12T08:47:30.936Z') } ] mongodb-trwkwn-backup-mongodb [direct: primary] admin> dump backup check data Success(B cluster connect  `kubectl get secrets -l app.kubernetes.io/instance=mongodb-trwkwn-backup`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.username}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.password}"`(B   `kubectl get secrets mongodb-trwkwn-backup-mongodb-account-root -o jsonpath="{.data.port}"`(B  DB_USERNAME:root;DB_PASSWORD:g300cV7275bHJW7t;DB_PORT:27017;DB_DATABASE:(B  `echo " echo \"rs.status()\" | mongosh --host mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local --port 27017 -u root -p g300cV7275bHJW7t --authenticationDatabase admin admin " | kubectl exec -it mongodb-trwkwn-backup-mongodb-0 --namespace ns-gtubu -- bash `(B  Current Mongosh Log ID: 698d94bce7412c9e588b79a1 Connecting to: mongodb://@mongodb-trwkwn-backup-mongodb-mongodb.ns-gtubu.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.10 Using MongoDB: 8.0.17-6 Using Mongosh: 2.5.10 mongosh 2.6.0 is available for download: https://www.mongodb.com/try/download/shell For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2026-02-12T08:48:15.531+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem 2026-02-12T08:48:18.601+00:00: You are running this process as the root user, which is not recommended 2026-02-12T08:48:18.601+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile 2026-02-12T08:48:18.601+00:00: We suggest setting the contents of sysfsFile to 0. 2026-02-12T08:48:18.601+00:00: vm.max_map_count is too low ------ mongodb-trwkwn-backup-mongodb [direct: primary] admin> { set: 'mongodb-trwkwn-backup-mongodb', date: ISODate('2026-02-12T08:52:18.036Z'), myState: 1, term: Long('1'), syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'), majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3, optimes: { lastCommittedOpTime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, lastCommittedWallTime: ISODate('2026-02-12T08:52:16.444Z'), readConcernMajorityOpTime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, appliedOpTime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, durableOpTime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, writtenOpTime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, lastAppliedWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastDurableWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastWrittenWallTime: ISODate('2026-02-12T08:52:16.444Z') }, lastStableRecoveryTimestamp: Timestamp({ t: 1770886279, i: 1 }), electionCandidateMetrics: { lastElectionReason: 'electionTimeout', lastElectionDate: ISODate('2026-02-12T08:48:19.534Z'), electionTerm: Long('1'), lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1770886099, i: 1 }), t: Long('-1') }, lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1770886099, i: 1 }), t: Long('-1') }, lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1770886099, i: 1 }), t: Long('-1') }, numVotesNeeded: 1, priorityAtElection: 2, electionTimeoutMillis: Long('10000'), newTermStartDate: ISODate('2026-02-12T08:48:19.661Z'), wMajorityWriteAvailabilityDate: ISODate('2026-02-12T08:48:19.771Z') }, members: [ { _id: 0, name: 'mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 243, optime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeDate: ISODate('2026-02-12T08:52:16.000Z'), optimeWritten: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeWrittenDate: ISODate('2026-02-12T08:52:16.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastDurableWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastWrittenWallTime: ISODate('2026-02-12T08:52:16.444Z'), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp({ t: 1770886099, i: 2 }), electionDate: ISODate('2026-02-12T08:48:19.000Z'), configVersion: 5, configTerm: 1, self: true, lastHeartbeatMessage: '' }, { _id: 1, name: 'mongodb-trwkwn-backup-mongodb-1.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 189, optime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeDurable: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeWritten: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeDate: ISODate('2026-02-12T08:52:16.000Z'), optimeDurableDate: ISODate('2026-02-12T08:52:16.000Z'), optimeWrittenDate: ISODate('2026-02-12T08:52:16.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastDurableWallTime: 
ISODate('2026-02-12T08:52:16.444Z'), lastWrittenWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastHeartbeat: ISODate('2026-02-12T08:52:17.131Z'), lastHeartbeatRecv: ISODate('2026-02-12T08:52:16.932Z'), pingMs: Long('0'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', syncSourceId: 2, infoMessage: '', configVersion: 5, configTerm: 1 }, { _id: 2, name: 'mongodb-trwkwn-backup-mongodb-2.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 167, optime: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeDurable: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeWritten: { ts: Timestamp({ t: 1770886336, i: 1 }), t: Long('1') }, optimeDate: ISODate('2026-02-12T08:52:16.000Z'), optimeDurableDate: ISODate('2026-02-12T08:52:16.000Z'), optimeWrittenDate: ISODate('2026-02-12T08:52:16.000Z'), lastAppliedWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastDurableWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastWrittenWallTime: ISODate('2026-02-12T08:52:16.444Z'), lastHeartbeat: ISODate('2026-02-12T08:52:17.032Z'), lastHeartbeatRecv: ISODate('2026-02-12T08:52:16.733Z'), pingMs: Long('0'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-trwkwn-backup-mongodb-0.mongodb-trwkwn-backup-mongodb-headless.ns-gtubu.svc:27017', syncSourceId: 0, infoMessage: '', configVersion: 5, configTerm: 1 } ], ok: 1, '$clusterTime': { clusterTime: Timestamp({ t: 1770886336, i: 1 }), signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') } }, operationTime: Timestamp({ t: 1770886336, i: 1 }) } mongodb-trwkwn-backup-mongodb [direct: primary] admin> connect cluster Success(B delete cluster mongodb-trwkwn-backup  `kbcli cluster delete mongodb-trwkwn-backup --auto-approve --namespace ns-gtubu `(B  pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Running 0 4m29s mongodb-trwkwn-backup-mongodb-1 4/4 Running 0 3m59s mongodb-trwkwn-backup-mongodb-2 4/4 Running 0 3m27s Cluster mongodb-trwkwn-backup deleted pod_info:mongodb-trwkwn-backup-mongodb-0 4/4 Terminating 0 4m49s delete cluster pod done(B check cluster resource non-exist OK: pvc(B delete cluster done(B check resource cm non exists check resource cm non exists(B cluster delete backup  `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-gtubu-mongodb-trwkwn-20260212164734 --namespace ns-gtubu `(B  backup.dataprotection.kubeblocks.io/backup-ns-gtubu-mongodb-trwkwn-20260212164734 patched  `kbcli cluster delete-backup mongodb-trwkwn --name backup-ns-gtubu-mongodb-trwkwn-20260212164734 --force --auto-approve --namespace ns-gtubu `(B  Backup backup-ns-gtubu-mongodb-trwkwn-20260212164734 deleted cluster list-logs  `kbcli cluster list-logs mongodb-trwkwn --namespace ns-gtubu `(B  cluster logs  `kbcli cluster logs mongodb-trwkwn --tail 30 --namespace ns-gtubu `(B  2026-02-12T08:38:26Z INFO HA This member is Cluster's leader 2026-02-12T08:38:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:39:26Z INFO HA This member is Cluster's leader 2026-02-12T08:39:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:40:26Z INFO HA This member is Cluster's leader 2026-02-12T08:40:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:41:26Z INFO HA This member is Cluster's leader 2026-02-12T08:41:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:42:26Z INFO HA This member is Cluster's leader 2026-02-12T08:42:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:43:26Z INFO HA This member is 
Cluster's leader 2026-02-12T08:43:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:44:26Z INFO HA This member is Cluster's leader 2026-02-12T08:44:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:45:26Z INFO HA This member is Cluster's leader 2026-02-12T08:45:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:46:26Z INFO HA This member is Cluster's leader 2026-02-12T08:46:26Z DEBUG HA Refresh leader ttl 2026-02-12T08:47:28Z INFO HA This member is Cluster's leader 2026-02-12T08:47:28Z DEBUG HA Refresh leader ttl 2026-02-12T08:48:27Z INFO HA This member is Cluster's leader 2026-02-12T08:48:27Z DEBUG HA Refresh leader ttl 2026-02-12T08:49:27Z INFO HA This member is Cluster's leader 2026-02-12T08:49:27Z DEBUG HA Refresh leader ttl 2026-02-12T08:50:27Z INFO HA This member is Cluster's leader 2026-02-12T08:50:27Z DEBUG HA Refresh leader ttl 2026-02-12T08:51:27Z INFO HA This member is Cluster's leader 2026-02-12T08:51:27Z DEBUG HA Refresh leader ttl 2026-02-12T08:52:27Z INFO HA This member is Cluster's leader 2026-02-12T08:52:27Z DEBUG HA Refresh leader ttl cluster logs running  `kbcli cluster logs mongodb-trwkwn --tail 30 --file-type=running --namespace ns-gtubu `(B  ==> /data/mongodb/logs/mongodb.log <== {"t":{"$date":"2026-02-12T08:53:07.930+00:00"},"s":"I", "c":"ACCESS", "id":10483900,"ctx":"conn45878","msg":"Connection not authenticating","attr":{"client":"10.244.6.95:51926","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:07.931+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn45879","msg":"client metadata","attr":{"remote":"10.244.6.95:51934","client":"conn45879","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:07.931+00:00"},"s":"I", "c":"ACCESS", "id":6788604, "ctx":"conn45879","msg":"Auth metrics report","attr":{"metric":"acquireUser","micros":0}} {"t":{"$date":"2026-02-12T08:53:07.937+00:00"},"s":"I", "c":"ACCESS", "id":5286306, "ctx":"conn45876","msg":"Successfully authenticated","attr":{"client":"10.244.4.139:52148","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","result":0,"metrics":{"conversation_duration":{"micros":69678,"summary":{"0":{"step":1,"step_total":2,"duration_micros":59},"1":{"step":2,"step_total":2,"duration_micros":32}}}},"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"},"extraInfo":{}}} {"t":{"$date":"2026-02-12T08:53:07.937+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn45876","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2026-02-12T08:53:07.937+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn45875","msg":"Interrupted operation as its client disconnected","attr":{"opId":47101953}} {"t":{"$date":"2026-02-12T08:53:07.937+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45876","msg":"Connection ended","attr":{"remote":"10.244.4.139:52148","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"84c19196-ded8-451c-9b8e-c768637dab45"}},"connectionId":45876,"connectionCount":44}} {"t":{"$date":"2026-02-12T08:53:07.937+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45874","msg":"Connection 
ended","attr":{"remote":"10.244.4.139:52132","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"10824f8a-42f5-4a4f-a96b-624bacd1bc94"}},"connectionId":45874,"connectionCount":43}} {"t":{"$date":"2026-02-12T08:53:07.938+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45875","msg":"Connection ended","attr":{"remote":"10.244.4.139:52140","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"a8e7b3de-666d-46b6-92c6-cb4006544566"}},"connectionId":45875,"connectionCount":42}} {"t":{"$date":"2026-02-12T08:53:07.940+00:00"},"s":"I", "c":"ACCESS", "id":5286306, "ctx":"conn45879","msg":"Successfully authenticated","attr":{"client":"10.244.6.95:51934","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","result":0,"metrics":{"conversation_duration":{"micros":9027,"summary":{"0":{"step":1,"step_total":2,"duration_micros":58},"1":{"step":2,"step_total":2,"duration_micros":20}}}},"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"},"extraInfo":{}}} {"t":{"$date":"2026-02-12T08:53:07.940+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn45879","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2026-02-12T08:53:07.940+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn45877","msg":"Interrupted operation as its client disconnected","attr":{"opId":47104001}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45878","msg":"Connection ended","attr":{"remote":"10.244.6.95:51926","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"9e734da8-c557-44fd-b979-feb01b21ea47"}},"connectionId":45878,"connectionCount":41}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45879","msg":"Connection ended","attr":{"remote":"10.244.6.95:51934","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"c0d9f583-7773-473e-ad38-432b25efc82f"}},"connectionId":45879,"connectionCount":40}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45877","msg":"Connection ended","attr":{"remote":"10.244.6.95:51932","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"ffc4c0c5-b95a-4bed-b1f8-8436b6577846"}},"connectionId":45877,"connectionCount":39}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.139:52152","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"771f0dce-cc9c-4908-937e-76e4870cc6d5"}},"connectionId":45880,"connectionCount":40}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.139:52164","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"53aff018-24ea-47c4-a613-7dfe779ed547"}},"connectionId":45881,"connectionCount":41}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn45880","msg":"client metadata","attr":{"remote":"10.244.4.139:52152","client":"conn45880","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:07.941+00:00"},"s":"I", "c":"ACCESS", "id":10483900,"ctx":"conn45880","msg":"Connection not 
authenticating","attr":{"client":"10.244.4.139:52152","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:07.942+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn45881","msg":"client metadata","attr":{"remote":"10.244.4.139:52164","client":"conn45881","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:07.942+00:00"},"s":"I", "c":"ACCESS", "id":10483900,"ctx":"conn45881","msg":"Connection not authenticating","attr":{"client":"10.244.4.139:52164","doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:08.035+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.139:52176","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"713bb020-9bb2-40d4-b799-07e057f82540"}},"connectionId":45882,"connectionCount":42}} {"t":{"$date":"2026-02-12T08:53:08.035+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn45882","msg":"client metadata","attr":{"remote":"10.244.4.139:52176","client":"conn45882","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T08:53:08.035+00:00"},"s":"I", "c":"ACCESS", "id":6788604, "ctx":"conn45882","msg":"Auth metrics report","attr":{"metric":"acquireUser","micros":0}} {"t":{"$date":"2026-02-12T08:53:08.137+00:00"},"s":"I", "c":"ACCESS", "id":5286306, "ctx":"conn45882","msg":"Successfully authenticated","attr":{"client":"10.244.4.139:52176","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","result":0,"metrics":{"conversation_duration":{"micros":101834,"summary":{"0":{"step":1,"step_total":2,"duration_micros":57},"1":{"step":2,"step_total":2,"duration_micros":32}}}},"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"},"extraInfo":{}}} {"t":{"$date":"2026-02-12T08:53:08.137+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn45882","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}} {"t":{"$date":"2026-02-12T08:53:08.138+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn45881","msg":"Interrupted operation as its client disconnected","attr":{"opId":47108097}} {"t":{"$date":"2026-02-12T08:53:08.138+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45882","msg":"Connection ended","attr":{"remote":"10.244.4.139:52176","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"713bb020-9bb2-40d4-b799-07e057f82540"}},"connectionId":45882,"connectionCount":41}} {"t":{"$date":"2026-02-12T08:53:08.138+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45880","msg":"Connection ended","attr":{"remote":"10.244.4.139:52152","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"771f0dce-cc9c-4908-937e-76e4870cc6d5"}},"connectionId":45880,"connectionCount":40}} {"t":{"$date":"2026-02-12T08:53:08.138+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn45881","msg":"Connection ended","attr":{"remote":"10.244.4.139:52164","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"53aff018-24ea-47c4-a613-7dfe779ed547"}},"connectionId":45881,"connectionCount":39}} 
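The running log above is structured JSON, so it is often easier to filter than to read linearly. A sketch, assuming jq is available on the machine running kbcli; the field names are the ones visible in the entries above:

  kbcli cluster logs mongodb-trwkwn --tail 200 --file-type=running --namespace ns-gtubu \
    | grep '^{' \
    | jq -c 'select(.c == "ACCESS" and .msg == "Successfully authenticated")
             | {time: .t."$date", user: .attr.user, client: .attr.client}'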
==> /data/mongodb/logs/mongodb.log.2026-02-12T07-54-04 <== {"t":{"$date":"2026-02-12T07:53:29.134+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24361","msg":"Connection ended","attr":{"remote":"10.244.4.84:59972","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"f711b6ec-ac1b-4c93-b0e1-9ee3e15eaf5d"}},"connectionId":24361,"connectionCount":11}} {"t":{"$date":"2026-02-12T07:53:29.447+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.5.90:38014","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2162afd9-db83-4173-b6df-76b10ea832e7"}},"connectionId":24362,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:29.447+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.5.90:38030","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"46d00c16-c874-47e4-9f11-56e7c4a1d702"}},"connectionId":24363,"connectionCount":13}} {"t":{"$date":"2026-02-12T07:53:29.447+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24362","msg":"client metadata","attr":{"remote":"10.244.5.90:38014","client":"conn24362","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:29.447+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24363","msg":"client metadata","attr":{"remote":"10.244.5.90:38030","client":"conn24363","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:29.448+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24363","msg":"Connection ended","attr":{"remote":"10.244.5.90:38030","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"46d00c16-c874-47e4-9f11-56e7c4a1d702"}},"connectionId":24363,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:29.449+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24362","msg":"Connection ended","attr":{"remote":"10.244.5.90:38014","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2162afd9-db83-4173-b6df-76b10ea832e7"}},"connectionId":24362,"connectionCount":11}} {"t":{"$date":"2026-02-12T07:53:29.498+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.5.90:38044","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2c09d2c3-9c12-46f8-af85-6d308518d890"}},"connectionId":24364,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:29.498+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.5.90:38042","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"9914ed0e-1fac-4241-9829-42d538120221"}},"connectionId":24365,"connectionCount":13}} {"t":{"$date":"2026-02-12T07:53:29.499+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24364","msg":"client metadata","attr":{"remote":"10.244.5.90:38044","client":"conn24364","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:29.499+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24365","msg":"client metadata","attr":{"remote":"10.244.5.90:38042","client":"conn24365","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} 
{"t":{"$date":"2026-02-12T07:53:29.500+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24365","msg":"Connection ended","attr":{"remote":"10.244.5.90:38042","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"9914ed0e-1fac-4241-9829-42d538120221"}},"connectionId":24365,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:29.500+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24364","msg":"Connection ended","attr":{"remote":"10.244.5.90:38044","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2c09d2c3-9c12-46f8-af85-6d308518d890"}},"connectionId":24364,"connectionCount":11}} {"t":{"$date":"2026-02-12T07:53:29.947+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.5.90:38052","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"172bc06d-0e3f-4043-9921-bc9d2c7d41aa"}},"connectionId":24366,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:29.947+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24366","msg":"client metadata","attr":{"remote":"10.244.5.90:38052","client":"conn24366","negotiatedCompressors":["snappy","zstd","zlib"],"doc":{"driver":{"name":"NetworkInterfaceTL-MirrorMaestro","version":"8.0.17-6"},"os":{"type":"Linux","name":"Red Hat Enterprise Linux release 9.7 (Plow)","architecture":"x86_64","version":"Kernel 5.15.0-1102-azure"}}}} {"t":{"$date":"2026-02-12T07:53:29.948+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24366","msg":"Connection ended","attr":{"remote":"10.244.5.90:38052","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"172bc06d-0e3f-4043-9921-bc9d2c7d41aa"}},"connectionId":24366,"connectionCount":11}} {"t":{"$date":"2026-02-12T07:53:30.074+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.84:59982","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"e15d360c-cd1c-44dc-87d2-c31c0b859373"}},"connectionId":24367,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:30.074+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.84:59992","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"d34488fb-dda4-4abc-821a-1d546f2224fc"}},"connectionId":24368,"connectionCount":13}} {"t":{"$date":"2026-02-12T07:53:30.074+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24367","msg":"client metadata","attr":{"remote":"10.244.4.84:59982","client":"conn24367","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:30.075+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24367","msg":"Connection ended","attr":{"remote":"10.244.4.84:59982","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"e15d360c-cd1c-44dc-87d2-c31c0b859373"}},"connectionId":24367,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:30.103+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.84:60004","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"28e99d61-62f9-4980-a511-03d04a37d17b"}},"connectionId":24369,"connectionCount":13}} {"t":{"$date":"2026-02-12T07:53:30.103+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"10.244.4.84:60006","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2383ab37-4d30-4f08-ac33-f2a7e319b5a2"}},"connectionId":24370,"connectionCount":14}} {"t":{"$date":"2026-02-12T07:53:30.104+00:00"},"s":"I", "c":"NETWORK", "id":51800, 
"ctx":"conn24368","msg":"client metadata","attr":{"remote":"10.244.4.84:59992","client":"conn24368","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:30.105+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24368","msg":"Connection ended","attr":{"remote":"10.244.4.84:59992","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"d34488fb-dda4-4abc-821a-1d546f2224fc"}},"connectionId":24368,"connectionCount":13}} {"t":{"$date":"2026-02-12T07:53:30.105+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24370","msg":"client metadata","attr":{"remote":"10.244.4.84:60006","client":"conn24370","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:30.106+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn24369","msg":"client metadata","attr":{"remote":"10.244.4.84:60004","client":"conn24369","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.24.10"}}} {"t":{"$date":"2026-02-12T07:53:30.112+00:00"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplNetwork","msg":"Connecting","attr":{"hostAndPort":"mongodb-trwkwn-mongodb-4.mongodb-trwkwn-mongodb-headless.ns-gtubu.svc:27017"}} {"t":{"$date":"2026-02-12T07:53:30.130+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24370","msg":"Connection ended","attr":{"remote":"10.244.4.84:60006","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"2383ab37-4d30-4f08-ac33-f2a7e319b5a2"}},"connectionId":24370,"connectionCount":12}} {"t":{"$date":"2026-02-12T07:53:30.131+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn24369","msg":"Connection ended","attr":{"remote":"10.244.4.84:60004","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"28e99d61-62f9-4980-a511-03d04a37d17b"}},"connectionId":24369,"connectionCount":11}} {"t":{"$date":"2026-02-12T07:53:30.177+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn911","msg":"Connection ended","attr":{"remote":"127.0.0.1:45076","isLoadBalanced":false,"uuid":{"uuid":{"$uuid":"1a02f9f4-cb74-4444-89e3-a90050454421"}},"connectionId":911,"connectionCount":10}} ==> /data/mongodb/logs/mongodb.log.2026-02-12T08-03-12 <== {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting 
down checkpoint thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":7474902, "ctx":"SignalHandler","msg":"Shutting down oplog cap maintainer thread","attr":{"reason":{"code":91,"codeName":"ShutdownInProgress","errmsg":"The storage catalog is being closed."}}} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":7474901, "ctx":"SignalHandler","msg":"Finished shutting down oplog cap maintainer thread"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"} {"t":{"$date":"2026-02-12T07:55:30.580+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"} {"t":{"$date":"2026-02-12T07:55:30.581+00:00"},"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."} {"t":{"$date":"2026-02-12T07:55:30.581+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"} {"t":{"$date":"2026-02-12T07:55:30.581+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"} {"t":{"$date":"2026-02-12T07:55:30.581+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"} {"t":{"$date":"2026-02-12T07:55:30.584+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}} {"t":{"$date":"2026-02-12T07:55:30.587+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770882930,"ts_usec":587940,"thread":"14:0x7f443ef0f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"}}} {"t":{"$date":"2026-02-12T07:55:30.591+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770882930,"ts_usec":591181,"thread":"14:0x7f443ef0f640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","log_id":1000000,"category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 385, snapshot max: 385 snapshot count: 0, oldest timestamp: (1770882627, 2) , meta checkpoint timestamp: (1770882927, 2) base write gen: 353"}}} {"t":{"$date":"2026-02-12T07:55:30.675+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":{"message":{"ts_sec":1770882930,"ts_usec":675785,"thread":"14:0x7f443ef0f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 87 milliseconds"}}} {"t":{"$date":"2026-02-12T07:55:30.676+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770882930,"ts_usec":676248,"thread":"14:0x7f443ef0f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1493200,"category_id":34,"verbose_level":"INFO","verbose_level_id":0,"msg":"shutdown was completed successfully and took 91ms, including 2ms for the rollback to stable, and 87ms for the checkpoint."}}} {"t":{"$date":"2026-02-12T07:55:30.705+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":121}} {"t":{"$date":"2026-02-12T07:55:30.705+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."} {"t":{"$date":"2026-02-12T07:55:30.705+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"} {"t":{"$date":"2026-02-12T07:55:30.705+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"} {"t":{"$date":"2026-02-12T07:55:30.714+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"} {"t":{"$date":"2026-02-12T07:55:30.715+00:00"},"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15001 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"1 ms","Shut down the thread that aborts expired transactions":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"2 ms","Shut down replication executor":"1 ms","Join replication executor":"0 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the transport layer":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"125 ms","Shut down full-time data capture":"0 ms","Shut down online certificate status protocol manager":"0 ms","shutdownTask total elapsed time":"15144 ms"}}}} {"t":{"$date":"2026-02-12T07:55:30.715+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}} ==> /data/mongodb/logs/mongodb.log.2026-02-12T08-08-11 <== {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", 
"c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":7474902, "ctx":"SignalHandler","msg":"Shutting down oplog cap maintainer thread","attr":{"reason":{"code":91,"codeName":"ShutdownInProgress","errmsg":"The storage catalog is being closed."}}} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":11212204,"ctx":"OplogCapMaintainerThread-local.oplog.rs","msg":"OplogCapMaintainerThread interrupted","attr":{"reason":"interrupted at shutdown"}} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":7474901, "ctx":"SignalHandler","msg":"Finished shutting down oplog cap maintainer thread"} {"t":{"$date":"2026-02-12T08:07:43.792+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"} {"t":{"$date":"2026-02-12T08:07:43.793+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"} {"t":{"$date":"2026-02-12T08:07:43.793+00:00"},"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."} {"t":{"$date":"2026-02-12T08:07:43.793+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"} {"t":{"$date":"2026-02-12T08:07:43.793+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"} {"t":{"$date":"2026-02-12T08:07:43.793+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"} {"t":{"$date":"2026-02-12T08:07:43.796+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}} {"t":{"$date":"2026-02-12T08:07:43.799+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":{"message":{"ts_sec":1770883663,"ts_usec":799801,"thread":"18:0x7fa364c02640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"}}} {"t":{"$date":"2026-02-12T08:07:43.830+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770883663,"ts_usec":830870,"thread":"18:0x7fa364c02640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","log_id":1000000,"category_id":7,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 1506, snapshot max: 1506 snapshot count: 0, oldest timestamp: (1770883362, 4) , meta checkpoint timestamp: (1770883662, 4) base write gen: 371"}}} {"t":{"$date":"2026-02-12T08:07:43.903+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770883663,"ts_usec":903490,"thread":"18:0x7fa364c02640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1000000,"category_id":34,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 103 milliseconds"}}} {"t":{"$date":"2026-02-12T08:07:43.903+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1770883663,"ts_usec":903703,"thread":"18:0x7fa364c02640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","log_id":1493200,"category_id":34,"verbose_level":"INFO","verbose_level_id":0,"msg":"shutdown was completed successfully and took 107ms, including 2ms for the rollback to stable, and 103ms for the checkpoint."}}} {"t":{"$date":"2026-02-12T08:07:43.932+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":136}} {"t":{"$date":"2026-02-12T08:07:43.932+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."} {"t":{"$date":"2026-02-12T08:07:43.932+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"} {"t":{"$date":"2026-02-12T08:07:43.932+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"} {"t":{"$date":"2026-02-12T08:07:43.940+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"} {"t":{"$date":"2026-02-12T08:07:43.941+00:00"},"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15000 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"1 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"2 ms","Shut down the thread that aborts expired transactions":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"1 ms","Shut down replication executor":"0 ms","Join replication executor":"1 ms","Kill all operations for 
shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"1 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the transport layer":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"140 ms","Shut down full-time data capture":"1 ms","Shut down online certificate status protocol manager":"0 ms","shutdownTask total elapsed time":"15155 ms"}}}} {"t":{"$date":"2026-02-12T08:07:43.941+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}} delete cluster mongodb-trwkwn  `kbcli cluster delete mongodb-trwkwn --auto-approve --namespace ns-gtubu `(B  pod_info:mongodb-trwkwn-mongodb-0 4/4 Running 0 29m mongodb-trwkwn-mongodb-1 4/4 Running 0 43m mongodb-trwkwn-mongodb-2 4/4 Running 0 45m Cluster mongodb-trwkwn deleted pod_info:mongodb-trwkwn-mongodb-2 4/4 Terminating 0 45m delete cluster pod done(B check cluster resource non-exist OK: pvc(B delete cluster done(B check resource cm non exists check resource cm non exists(B Mongodb Test Suite All Done!(B Test Engine: mongodb Test Type: 6 --------------------------------------Mongodb 8.0.17 (Topology = replicaset Replicas 3) Test Result-------------------------------------- [PASSED]|[Create]|[Topology=replicaset;ComponentDefinition=mongodb-1.0.2;ComponentVersion=mongodb;ServiceVersion=8.0.17;]|[Description=Create a cluster with the specified topology replicaset with the specified component definition mongodb-1.0.2 and component version mongodb and service version 8.0.17](B [PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster](B [PASSED]|[AddData]|[Values=jyjav]|[Description=Add data to the cluster](B [PASSED]|[CheckAddDataReadonly]|[Values=jyjav;Role=Readonly]|[Description=Add data to the cluster readonly](B [PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster](B [PASSED]|[Update]|[Monitor=true]|[Description=Update the cluster Monitor enable](B [PASSED]|[HorizontalScaling Out]|[ComponentName=mongodb]|[Description=HorizontalScaling Out the cluster specify component mongodb](B [PASSED]|[HorizontalScaling In]|[ComponentName=mongodb]|[Description=HorizontalScaling In the cluster specify component mongodb](B [PASSED]|[Failover]|[HA=Delete Pod;ComponentName=mongodb]|[Description=Simulates conditions where pods terminating forced/graceful thereby testing deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.](B [PASSED]|[VolumeExpansion]|[ComponentName=mongodb]|[Description=VolumeExpansion the cluster specify component mongodb](B [PASSED]|[VerticalScaling]|[ComponentName=mongodb]|[Description=VerticalScaling the cluster specify component mongodb](B [PASSED]|[Stop]|[-]|[Description=Stop the cluster](B [PASSED]|[Start]|[-]|[Description=Start the cluster](B [PASSED]|[SwitchOver]|[ComponentName=mongodb]|[Description=SwitchOver the cluster specify component mongodb](B [PASSED]|[Failover]|[HA=Kill 1;ComponentName=mongodb]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal 
Mongodb Test Suite All Done!
Test Engine: mongodb
Test Type: 6
--------------------------------------Mongodb 8.0.17 (Topology = replicaset Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[Topology=replicaset;ComponentDefinition=mongodb-1.0.2;ComponentVersion=mongodb;ServiceVersion=8.0.17;]|[Description=Create a cluster with the specified topology replicaset with the specified component definition mongodb-1.0.2 and component version mongodb and service version 8.0.17]
[PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[AddData]|[Values=jyjav]|[Description=Add data to the cluster]
[PASSED]|[CheckAddDataReadonly]|[Values=jyjav;Role=Readonly]|[Description=Add data to the cluster readonly]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[PASSED]|[Update]|[Monitor=true]|[Description=Update the cluster Monitor enable]
[PASSED]|[HorizontalScaling Out]|[ComponentName=mongodb]|[Description=HorizontalScaling Out the cluster specify component mongodb]
[PASSED]|[HorizontalScaling In]|[ComponentName=mongodb]|[Description=HorizontalScaling In the cluster specify component mongodb]
[PASSED]|[Failover]|[HA=Delete Pod;ComponentName=mongodb]|[Description=Simulates conditions where pods terminating forced/graceful thereby testing deployment sanity (replica availability & uninterrupted service) and recovery workflow of the application.]
[PASSED]|[VolumeExpansion]|[ComponentName=mongodb]|[Description=VolumeExpansion the cluster specify component mongodb]
[PASSED]|[VerticalScaling]|[ComponentName=mongodb]|[Description=VerticalScaling the cluster specify component mongodb]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[SwitchOver]|[ComponentName=mongodb]|[Description=SwitchOver the cluster specify component mongodb]
[PASSED]|[Failover]|[HA=Kill 1;ComponentName=mongodb]|[Description=Simulates conditions where process 1 killed either due to expected/undesired processes thereby testing the application's resilience to unavailability of some replicas due to abnormal termination signals.]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[RebuildInstance]|[ComponentName=mongodb]|[Description=Rebuild the cluster instance specify component mongodb]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster TerminationPolicy WipeOut]
[PASSED]|[Backup]|[BackupMethod=datafile]|[Description=The cluster datafile Backup]
[PASSED]|[Restore]|[BackupMethod=datafile]|[Description=The cluster datafile Restore]
[PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=datafile]|[Description=Delete the datafile restore cluster]
[PASSED]|[RebuildInstance]|[ComponentName=mongodb]|[Description=Rebuild the cluster instance specify component mongodb]
[PASSED]|[Backup]|[BackupMethod=pbm-physical]|[Description=The cluster pbm-physical Backup]
[PASSED]|[Restore]|[BackupMethod=pbm-physical]|[Description=The cluster pbm-physical Restore]
[FAILED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=pbm-physical]|[Description=Delete the pbm-physical restore cluster]
[PASSED]|[Backup]|[BackupMethod=dump]|[Description=The cluster dump Backup]
[PASSED]|[Restore]|[BackupMethod=dump]|[Description=The cluster dump Restore]
[PASSED]|[Check Data]|[BackupMethod=dump]|[Description=Check the cluster data restore via dump]
[PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=dump]|[Description=Delete the dump restore cluster]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
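The report rows above follow a fixed [RESULT]|[Case]|[Args]|[Description=...] layout, so a run can be triaged mechanically rather than by scanning. An illustrative sketch, assuming the summary has been captured to a local file (report.txt is a hypothetical name); it counts results and lists failed cases, which in this run surfaces the Connect check after the pbm-physical restore:
  # Illustrative report triage, not part of the suite.
  REPORT_FILE=report.txt
  total=$(grep -c -E '^\[(PASSED|FAILED)\]' "$REPORT_FILE")
  failed=$(grep -c '^\[FAILED\]' "$REPORT_FILE")
  echo "total cases: $total, failed: $failed"
  # Show the failing rows for follow-up:
  grep '^\[FAILED\]' "$REPORT_FILE"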