source commons files
source engines files
source kubeblocks files
`kubectl get namespace | grep ns-jcrws`
`kubectl create namespace ns-jcrws`
namespace/ns-jcrws created
create namespace ns-jcrws done
download kbcli
`gh release list --repo apecloud/kbcli --limit 100 | (grep "1.0" || true)`
`curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash -s v1.0.0`
Your system is linux_amd64
Installing kbcli ...
Downloading ... (curl progress output omitted; 33.6M downloaded)
kbcli installed successfully.
Kubernetes: v1.32.3-eks-4096722
KubeBlocks: 1.0.0
kbcli: 1.0.0
Make sure your docker service is running and begin your journey with kbcli:

	kbcli playground init

For more information on how to get started, please visit:
	https://kubeblocks.io

download kbcli v1.0.0 done
Kubernetes: v1.32.3-eks-4096722
KubeBlocks: 1.0.0
kbcli: 1.0.0
Kubernetes Env: v1.32.3-eks-4096722
check snapshot controller
check snapshot controller done
eks default-vsc found
POD_RESOURCES: No resources found
found default storage class: gp3
KubeBlocks version is: 1.0.0
skip upgrade KubeBlocks
current KubeBlocks version: 1.0.0
Error: no repositories to show
helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
add helm chart repo chaos-mesh success
chaos mesh already installed
check component definition
set component name: mongodb
set component version: mongodb
set service versions: 8.0.8,8.0.6,8.0.4,7.0.19,7.0.16,7.0.12,6.0.22,6.0.20,6.0.16,5.0.30,5.0.28,4.4.29,4.2.24,4.0.28
set service versions sorted: 4.0.28,4.2.24,4.4.29,5.0.28,5.0.30,6.0.16,6.0.20,6.0.22,7.0.12,7.0.16,7.0.19,8.0.4,8.0.6,8.0.8
set mongodb component definition: mongodb-1.0.0-alpha.0
set replicas first: 3,4.0.28|3,4.2.24|3,4.4.29|3,5.0.28|3,5.0.30|3,6.0.16|3,6.0.20|3,6.0.22|3,7.0.12|3,7.0.16|3,7.0.19|3,8.0.4|3,8.0.6|3,8.0.8
set replicas third: 3,7.0.12
set replicas fourth: 3,7.0.12
set minimum cmpv service version replicas: 3,7.0.12
REPORT_COUNT: 1
CLUSTER_TOPOLOGY: set cluster topology: replicaset
set mongodb component definition: mongodb-1.0.0-alpha.0
LIMIT_CPU: 0.1
LIMIT_MEMORY: 0.5
storage size: 3
No resources found in ns-jcrws namespace.
termination_policy: DoNotTerminate
create 3 replica DoNotTerminate mongodb cluster
check component definition
set component definition by component version
check cmpd by labels
check cmpd by compDefs
set component definition: mongodb-1.0.0-alpha.0 by component version: mongodb

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mongodb-mvdokf
  namespace: ns-jcrws
spec:
  clusterDef: mongodb
  topology: replicaset
  terminationPolicy: DoNotTerminate
  componentSpecs:
    - name: mongodb
      serviceVersion: 7.0.12
      replicas: 3
      resources:
        limits:
          cpu: 100m
          memory: 0.5Gi
        requests:
          cpu: 100m
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 3Gi

`kubectl apply -f test_create_mongodb-mvdokf.yaml`
cluster.apps.kubeblocks.io/mongodb-mvdokf created
apply test_create_mongodb-mvdokf.yaml Success
`rm -rf test_create_mongodb-mvdokf.yaml`
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws`
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate                May 27,2025 18:35 UTC+0800   clusterdefinition.kubeblocks.io/name=mongodb
cluster_status: Creating (repeated 5 times while provisioning)
check cluster status done
cluster_status: Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws`
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-12-241.us-west-2.compute.internal/172.31.12.241   May 27,2025 18:35 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-4-125.us-west-2.compute.internal/172.31.4.125     May 27,2025 18:36 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234     May 27,2025 18:36 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0; secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root; DB_PASSWORD:9T2528tMs88Kt1zU; DB_PORT:27017; DB_DATABASE:
No resources found in ns-jcrws namespace.
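The DB_USERNAME/DB_PASSWORD/DB_PORT values above come out of the Secret's `.data` fields, which Kubernetes stores base64-encoded. A minimal sketch of the decoding step (not part of the test run itself; the secret and namespace names are taken from the log, and the `kubectl` lines are commented out because they need the live cluster):

```shell
#!/bin/sh
# Hypothetical helper: fetch and decode the root account credentials.
# Against the live cluster from this log it would look like:
#   NS=ns-jcrws
#   SECRET=mongodb-mvdokf-mongodb-account-root
#   DB_USERNAME=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.username}' | base64 -d)
#   DB_PASSWORD=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.password}' | base64 -d)
#   DB_PORT=$(kubectl get secret "$SECRET" -n "$NS" -o jsonpath='{.data.port}' | base64 -d)

# Offline demonstration of the same decoding step:
# the username "root" round-trips through base64 exactly as a Secret stores it.
encoded=$(printf '%s' root | base64)      # what appears in .data.username
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$encoded -> $decoded"               # prints: cm9vdA== -> root
```

The jsonpath expressions in the log return the raw encoded bytes, which is why a `base64 -d` is needed before the values are usable as connection parameters.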
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root; DB_PASSWORD:9T2528tMs88Kt1zU; DB_PORT:27017; DB_DATABASE:
check pod mongodb-mvdokf-mongodb-0 container_name mongodb exist password 9T2528tMs88Kt1zU
Container mongodb logs contain secret password: 2025-05-27T10:36:35Z INFO MongoDB Create user: root, passwd: 9T2528tMs88Kt1zU, roles: map[db:admin role:root]
describe cluster
`kbcli cluster describe mongodb-mvdokf --namespace ns-jcrws`
Name: mongodb-mvdokf   Created Time: May 27,2025 18:35 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY     STATUS    TERMINATION-POLICY
ns-jcrws    mongodb              replicaset   Running   DoNotTerminate

Endpoints:
COMPONENT   INTERNAL   EXTERNAL
mongodb     mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017
            mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017
            mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE                   ROLE        STATUS    AZ           NODE                                                        CREATED-TIME
mongodb     7.0.12            mongodb-mvdokf-mongodb-0   primary     Running   us-west-2a   ip-172-31-12-241.us-west-2.compute.internal/172.31.12.241   May 27,2025 18:35 UTC+0800
mongodb     7.0.12            mongodb-mvdokf-mongodb-1   secondary   Running   us-west-2a   ip-172-31-4-125.us-west-2.compute.internal/172.31.4.125     May 27,2025 18:36 UTC+0800
mongodb     7.0.12            mongodb-mvdokf-mongodb-2   secondary   Running   us-west-2a   ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234     May 27,2025 18:36 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
mongodb                         100m / 100m          512Mi / 512Mi           data:3Gi       kb-default-sc

Images:
COMPONENT   COMPONENT-DEFINITION    IMAGE
mongodb     mongodb-1.0.0-alpha.0   docker.io/apecloud/mongo:7.0.12
                                    docker.io/apecloud/kubeblocks-tools:1.0.0

Data Protection:
BACKUP-REPO   AUTO-BACKUP   BACKUP-SCHEDULE   BACKUP-METHOD   BACKUP-RETENTION   RECOVERABLE-TIME

Show cluster events: kbcli cluster list-events -n ns-jcrws mongodb-mvdokf

`kbcli cluster label mongodb-mvdokf app.kubernetes.io/instance- --namespace ns-jcrws`
label "app.kubernetes.io/instance" not found.
`kbcli cluster label mongodb-mvdokf app.kubernetes.io/instance=mongodb-mvdokf --namespace ns-jcrws`
`kbcli cluster label mongodb-mvdokf --list --namespace ns-jcrws`
NAME             NAMESPACE   LABELS
mongodb-mvdokf   ns-jcrws    app.kubernetes.io/instance=mongodb-mvdokf clusterdefinition.kubeblocks.io/name=mongodb
label cluster app.kubernetes.io/instance=mongodb-mvdokf Success
`kbcli cluster label case.name=kbcli.test1 -l app.kubernetes.io/instance=mongodb-mvdokf --namespace ns-jcrws`
`kbcli cluster label mongodb-mvdokf --list --namespace ns-jcrws`
NAME             NAMESPACE   LABELS
mongodb-mvdokf   ns-jcrws    app.kubernetes.io/instance=mongodb-mvdokf case.name=kbcli.test1 clusterdefinition.kubeblocks.io/name=mongodb
label cluster case.name=kbcli.test1 Success
`kbcli cluster label mongodb-mvdokf case.name=kbcli.test2 --overwrite --namespace ns-jcrws`
`kbcli cluster label mongodb-mvdokf --list --namespace ns-jcrws`
NAME             NAMESPACE   LABELS
mongodb-mvdokf   ns-jcrws    app.kubernetes.io/instance=mongodb-mvdokf case.name=kbcli.test2 clusterdefinition.kubeblocks.io/name=mongodb
label cluster case.name=kbcli.test2 Success
`kbcli cluster label mongodb-mvdokf case.name- --namespace ns-jcrws`
`kbcli cluster label mongodb-mvdokf --list --namespace ns-jcrws`
NAME             NAMESPACE   LABELS
mongodb-mvdokf   ns-jcrws    app.kubernetes.io/instance=mongodb-mvdokf clusterdefinition.kubeblocks.io/name=mongodb
delete cluster label case.name Success
list-accounts on characterType mongodb is not supported yet
cluster connect
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root; DB_PASSWORD:9T2528tMs88Kt1zU; DB_PORT:27017; DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo " echo \"rs.status()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835965bc4d95b97d55e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T10:36:27.562+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T10:36:27.562+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin>
{
  set: 'mongodb-mvdokf-mongodb',
  date: ISODate('2025-05-27T10:39:50.058Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2025-05-27T10:39:41.158Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2025-05-27T10:39:41.158Z'),
    lastDurableWallTime: ISODate('2025-05-27T10:39:41.158Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1748342360, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2025-05-27T10:36:30.159Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1748342189, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1748342189, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 2,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2025-05-27T10:36:30.477Z'),
    wMajorityWriteAvailabilityDate: ISODate('2025-05-27T10:36:30.667Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongodb-mvdokf-mongodb-0.mongodb-mvdokf-mongodb-headless.ns-jcrws.svc:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 214,
      optime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2025-05-27T10:39:41.000Z'),
      lastAppliedWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      lastDurableWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1748342190, i: 1 }),
      electionDate: ISODate('2025-05-27T10:36:30.000Z'),
      configVersion: 5,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongodb-mvdokf-mongodb-1.mongodb-mvdokf-mongodb-headless.ns-jcrws.svc:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 170,
      optime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2025-05-27T10:39:41.000Z'),
      optimeDurableDate: ISODate('2025-05-27T10:39:41.000Z'),
      lastAppliedWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      lastDurableWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      lastHeartbeat: ISODate('2025-05-27T10:39:48.865Z'),
      lastHeartbeatRecv: ISODate('2025-05-27T10:39:48.864Z'),
      pingMs: Long('45'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongodb-mvdokf-mongodb-0.mongodb-mvdokf-mongodb-headless.ns-jcrws.svc:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    },
    {
      _id: 2,
      name: 'mongodb-mvdokf-mongodb-2.mongodb-mvdokf-mongodb-headless.ns-jcrws.svc:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 140,
      optime: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1748342381, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2025-05-27T10:39:41.000Z'),
      optimeDurableDate: ISODate('2025-05-27T10:39:41.000Z'),
      lastAppliedWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      lastDurableWallTime: ISODate('2025-05-27T10:39:41.158Z'),
      lastHeartbeat: ISODate('2025-05-27T10:39:48.865Z'),
      lastHeartbeatRecv: ISODate('2025-05-27T10:39:48.865Z'),
      pingMs: Long('45'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongodb-mvdokf-mongodb-0.mongodb-mvdokf-mongodb-headless.ns-jcrws.svc:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1748342381, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('nZ4Zf1uU6S0LqkN/mg24VQcsYhM=', 0),
      keyId: Long('7509072528267018246')
    }
  },
  operationTime: Timestamp({ t: 1748342381, i: 1 })
}
mongodb-mvdokf-mongodb [direct: primary] admin>
connect cluster Success
insert batch data by db client
Error from server (NotFound): pods "test-db-client-executionloop-mongodb-mvdokf" not found
DB_CLIENT_BATCH_DATA_COUNT:
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge pods test-db-client-executionloop-mongodb-mvdokf --namespace ns-jcrws`
Error from server (NotFound): pods "test-db-client-executionloop-mongodb-mvdokf" not found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "test-db-client-executionloop-mongodb-mvdokf" not found
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root; DB_PASSWORD:9T2528tMs88Kt1zU; DB_PORT:27017; DB_DATABASE:

apiVersion: v1
kind: Pod
metadata:
  name: test-db-client-executionloop-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  containers:
    - name: test-dbclient
      imagePullPolicy: IfNotPresent
      image: docker.io/apecloud/dbclient:test
      args:
        - "--host"
        - "mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local"
        - "--user"
        - "root"
        - "--password"
        - "9T2528tMs88Kt1zU"
        - "--port"
        - "27017"
        - "--dbtype"
        - "mongodb"
        - "--test"
        - "executionloop"
        - "--duration"
        - "60"
        - "--interval"
        - "1"
  restartPolicy: Never

`kubectl apply -f test-db-client-executionloop-mongodb-mvdokf.yaml`
pod/test-db-client-executionloop-mongodb-mvdokf created
apply test-db-client-executionloop-mongodb-mvdokf.yaml Success
`rm -rf test-db-client-executionloop-mongodb-mvdokf.yaml`
check pod status
pod_status:
NAME                                          READY   STATUS              RESTARTS   AGE
test-db-client-executionloop-mongodb-mvdokf   0/1     ContainerCreating   0          6s
(status polled every ~6s: ContainerCreating through 11s, then Running from 17s through 70s)
check pod test-db-client-executionloop-mongodb-mvdokf status done
pod_status:
NAME                                          READY   STATUS      RESTARTS   AGE
test-db-client-executionloop-mongodb-mvdokf   0/1     Completed   0          76s
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws`
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status: Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws`
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-12-241.us-west-2.compute.internal/172.31.12.241   May 27,2025 18:35 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-4-125.us-west-2.compute.internal/172.31.4.125     May 27,2025 18:36 UTC+0800
mongodb-mvdokf-mongodb-2
ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   100m / 100m          512Mi / 512Mi           data:3Gi   ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234     May 27,2025 18:36 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0; secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root; DB_PASSWORD:9T2528tMs88Kt1zU; DB_PORT:27017; DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
10:41:10.165 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '{"insert": "executions_loop_table", "ordered": true, "txnNumber": 388, "$db": "admin", "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1748342470, "i": 1}}, "signature": {"hash": {"$binary": {"base64": "g1jbJSro6Od5lt7Ylo/mxYfMYNY=", "subType": "00"}}, "keyId": 7509072528267018246}}, "lsid": {"id": {"$binary": {"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"}}}, "documents": [{"_id": {"$oid": "683596c6fc715e2d1e5457db"}, "value": "executions_loop_test_388"}]}' with request id 399 to database admin on connection [connectionId{localValue:3, serverValue:3276}] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017
10:41:10.329 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 399 completed successfully in 164.10 ms on connection [connectionId{localValue:3, serverValue:3276}] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017
Inserted document: BsonObjectId{value=683596c6fc715e2d1e5457db}
(equivalent Sending/Execution/Inserted entries for txnNumber 389-402, request ids 400-413, all successful, omitted; interleaved progress lines:)
[ 53s ] executions total: 389 successful: 389 failed: 0 disconnect: 0
[ 54s ] executions total: 397 successful: 397 failed: 0 disconnect: 0
10:41:12.236 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '{"insert": "executions_loop_table", "ordered": true, "txnNumber": 403, "$db": "admin", "$clusterTime": {"clusterTime":
***"$timestamp": ***"t": 1748342472, "i": 1***, "signature": ***"hash": ***"$binary": ***"base64": "zwhF0NN+8UFNGejKHuWNaWt0ba4=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c8fc715e2d1e5457ea"***, "value": "executions_loop_test_403"***]***' with request id 414 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:12.337 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 414 completed successfully in 101.06 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c8fc715e2d1e5457ea*** 10:41:12.338 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 404, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342472, "i": 2***, "signature": ***"hash": ***"$binary": ***"base64": "zwhF0NN+8UFNGejKHuWNaWt0ba4=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c8fc715e2d1e5457eb"***, "value": "executions_loop_test_404"***]***' with request id 415 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:12.564 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 415 completed successfully in 225.70 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c8fc715e2d1e5457eb*** [ 
55s ] executions total: 404 successful: 404 failed: 0 disconnect: 0 10:41:12.565 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 405, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342472, "i": 3***, "signature": ***"hash": ***"$binary": ***"base64": "zwhF0NN+8UFNGejKHuWNaWt0ba4=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c8fc715e2d1e5457ec"***, "value": "executions_loop_test_405"***]***' with request id 416 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:12.860 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 416 completed successfully in 294.35 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c8fc715e2d1e5457ec*** 10:41:12.861 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 406, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342472, "i": 4***, "signature": ***"hash": ***"$binary": ***"base64": "zwhF0NN+8UFNGejKHuWNaWt0ba4=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c8fc715e2d1e5457ed"***, "value": "executions_loop_test_406"***]***' with request id 417 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:12.963 [main] DEBUG org.mongodb.driver.protocol.command -- 
Execution of command with request id 417 completed successfully in 101.67 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c8fc715e2d1e5457ed*** 10:41:12.964 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 407, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342472, "i": 5***, "signature": ***"hash": ***"$binary": ***"base64": "zwhF0NN+8UFNGejKHuWNaWt0ba4=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c8fc715e2d1e5457ee"***, "value": "executions_loop_test_407"***]***' with request id 418 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.062 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 418 completed successfully in 98.29 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c8fc715e2d1e5457ee*** 10:41:13.063 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 408, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342473, "i": 1***, "signature": ***"hash": ***"$binary": ***"base64": "XXSgu67bsv6ZjkDrFAct8tBIvtU=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c9fc715e2d1e5457ef"***, "value": "executions_loop_test_408"***]***' with request id 419 to database 
admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.137 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 419 completed successfully in 74.28 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c9fc715e2d1e5457ef*** 10:41:13.139 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 409, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342473, "i": 2***, "signature": ***"hash": ***"$binary": ***"base64": "XXSgu67bsv6ZjkDrFAct8tBIvtU=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c9fc715e2d1e5457f0"***, "value": "executions_loop_test_409"***]***' with request id 420 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.144 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 420 completed successfully in 5.52 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c9fc715e2d1e5457f0*** 10:41:13.145 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"insert": "executions_loop_table", "ordered": true, "txnNumber": 410, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342473, "i": 3***, "signature": ***"hash": ***"$binary": ***"base64": "XXSgu67bsv6ZjkDrFAct8tBIvtU=", "subType": "00"***, "keyId": 7509072528267018246***, "lsid": ***"id": ***"$binary": 
***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***, "documents": [***"_id": ***"$oid": "683596c9fc715e2d1e5457f1"***, "value": "executions_loop_test_410"***]***' with request id 421 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.231 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 421 completed successfully in 85.98 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 Inserted document: BsonObjectId***value=683596c9fc715e2d1e5457f1*** [ 60s ] executions total: 410 successful: 410 failed: 0 disconnect: 0 10:41:13.235 [main] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"endSessions": [***"id": ***"$binary": ***"base64": "Y/1nQMTWR6+bemZpy0s1vg==", "subType": "04"***], "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342473, "i": 4***, "signature": ***"hash": ***"$binary": ***"base64": "XXSgu67bsv6ZjkDrFAct8tBIvtU=", "subType": "00"***, "keyId": 7509072528267018246***, "$readPreference": ***"mode": "primaryPreferred"***' with request id 422 to database admin on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.236 [main] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 422 completed successfully in 1.03 ms on connection [connectionId***localValue:3, serverValue:3276***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.237 [cluster-rtt-ClusterId***value='6835968dfc715e2d1e545657', description='null'***-mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017] DEBUG org.mongodb.driver.protocol.command -- Sending command '***"hello": 1, "$db": "admin", "$clusterTime": ***"clusterTime": ***"$timestamp": ***"t": 1748342473, "i": 4***, 
"signature": ***"hash": ***"$binary": ***"base64": "XXSgu67bsv6ZjkDrFAct8tBIvtU=", "subType": "00"***, "keyId": 7509072528267018246***, "$readPreference": ***"mode": "primaryPreferred"***' with request id 423 to database admin on connection [connectionId***localValue:2, serverValue:1839***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.238 [main] DEBUG org.mongodb.driver.connection -- Closed connection [connectionId***localValue:3, serverValue:3276***] to mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 because the pool has been closed. 10:41:13.238 [main] DEBUG org.mongodb.driver.connection -- Closing connection connectionId***localValue:3, serverValue:3276*** 10:41:13.238 [cluster-rtt-ClusterId***value='6835968dfc715e2d1e545657', description='null'***-mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017] DEBUG org.mongodb.driver.protocol.command -- Execution of command with request id 423 completed successfully in 1.01 ms on connection [connectionId***localValue:2, serverValue:1839***] to server mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local:27017 10:41:13.239 [main] DEBUG org.mongodb.driver.connection -- Closing connection connectionId***localValue:1, serverValue:1838*** 10:41:13.239 [main] DEBUG org.mongodb.driver.connection -- Closing connection connectionId***localValue:2, serverValue:1839*** Test Result: Total Executions: 410 Successful Executions: 410 Failed Executions: 0 Disconnection Counts: 0 Connection Information: Database Type: mongodb Host: mongodb-mvdokf-mongodb.ns-jcrws.svc.cluster.local Port: 27017 Database: Table: User: root Org: Access Mode: mysql Test Type: executionloop Query: Duration: 60 seconds Interval: 1 seconds DB_CLIENT_BATCH_DATA_COUNT: 410 `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge pods test-db-client-executionloop-mongodb-mvdokf --namespace ns-jcrws ` pod/test-db-client-executionloop-mongodb-mvdokf patched (no change) Warning: Immediate deletion does not wait for 
confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-db-client-executionloop-mongodb-mvdokf" force deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.remove({}) ; db.col.insertOne({a:'sxxpu'})\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835971021db4cde605e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:36:27.562+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:36:27.562+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> DeprecationWarning: Collection.remove() is deprecated. Use deleteOne, deleteMany, findOneAndDelete, or bulkWrite.
{ acknowledged: true, insertedId: ObjectId('6835972a21db4cde605e739c') }
mongodb-mvdokf-mongodb [direct: primary] admin>
add consistent data sxxpu Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.remove({}) ; db.col.insertOne({a:'sxxpu'})\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 683597480a5248cfbb5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:36:58.564+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:36:58.565+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> DeprecationWarning: Collection.remove() is deprecated. Use deleteOne, deleteMany, findOneAndDelete, or bulkWrite.
Uncaught MongoServerError[NotWritablePrimary]: not primary
admin>
check add consistent data readonly Success
skip cluster Upgrade
test failover drainnode
check node drain
check node drain success
kubectl get pod mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -o jsonpath='{.spec.nodeName}'
get node name:ip-172-31-12-241.us-west-2.compute.internal success
check if multiple pods are on the same node
kubectl get pod mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -o jsonpath='{.spec.nodeName}'
get node name:ip-172-31-4-125.us-west-2.compute.internal success
kubectl get pod mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -o jsonpath='{.spec.nodeName}'
get node name:ip-172-31-2-234.us-west-2.compute.internal success
kubectl drain ip-172-31-12-241.us-west-2.compute.internal --delete-emptydir-data --ignore-daemonsets --force --grace-period 0 --timeout 60s
node/ip-172-31-12-241.us-west-2.compute.internal cordoned
Warning: ignoring DaemonSet-managed Pods: chaos-mesh/chaos-daemon-9f6w6, kube-system/aws-node-z6pvp, kube-system/ebs-csi-node-hzrt2, kube-system/kube-proxy-kx6hz
evicting pod ns-vseqa/mysql-ctoiub-mysql-0
evicting pod kb-edsjw/kbcli-test-minio-7655786fc8-m5p6w
evicting pod ns-jcrws/mongodb-mvdokf-mongodb-0
pod/mysql-ctoiub-mysql-0 evicted
pod/mongodb-mvdokf-mongodb-0 evicted
pod/kbcli-test-minio-7655786fc8-m5p6w evicted
node/ip-172-31-12-241.us-west-2.compute.internal drained
kubectl uncordon ip-172-31-12-241.us-west-2.compute.internal
node/ip-172-31-12-241.us-west-2.compute.internal uncordoned
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-9-33.us-west-2.compute.internal/172.31.9.33 May 27,2025 18:43 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-4-125.us-west-2.compute.internal/172.31.4.125 May 27,2025 18:36 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 100m / 100m 512Mi / 512Mi data:3Gi ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234 May 27,2025 18:36 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
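The role check above reduces to scraping the ROLE column of `kbcli cluster list-instances` and splitting pods into one primary and the rest secondaries. A minimal sketch of that parsing step, run against a trimmed sample of the output captured above (the name/role column positions are an assumption of this sketch):

```shell
# Sample rows from the list-instances table, trimmed to name and role.
instances='mongodb-mvdokf-mongodb-0 secondary
mongodb-mvdokf-mongodb-1 primary
mongodb-mvdokf-mongodb-2 secondary'

# The pod whose role column reads "primary"; everything else is a secondary.
primary=$(printf '%s\n' "$instances" | awk '$2 == "primary" {print $1}')
secondaries=$(printf '%s\n' "$instances" | awk '$2 == "secondary" {print $1}' | tr '\n' ' ')

echo "primary: ${primary};secondary: ${secondaries}"
```

This reproduces the `primary: ...;secondary: ...` summary line the test emits after "check cluster role done".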
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-1
failover drainnode Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 683597ef8a51913b0c5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:36:58.564+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:36:58.565+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835981c528a8e28ce5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:37:27.630+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:37:27.630+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check db_client batch data Success
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster vscale mongodb-mvdokf --auto-approve --force=true --components mongodb --cpu 200m --memory 0.6Gi --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-verticalscaling-srhgr created successfully, you can view the progress: kbcli cluster describe-ops mongodb-mvdokf-verticalscaling-srhgr -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-verticalscaling-srhgr ns-jcrws VerticalScaling mongodb-mvdokf mongodb Running 0/3 May 27,2025 18:48 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating (polled 18 times while the rolling restart progressed)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-9-33.us-west-2.compute.internal/172.31.9.33 May 27,2025 18:48 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-10-29.us-west-2.compute.internal/172.31.10.29 May 27,2025 18:49 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234 May 27,2025 18:48 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-verticalscaling-srhgr ns-jcrws VerticalScaling mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:48 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-verticalscaling-srhgr ns-jcrws VerticalScaling mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:48 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-verticalscaling-srhgr --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-verticalscaling-srhgr patched
`kbcli cluster delete-ops --name mongodb-mvdokf-verticalscaling-srhgr --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-verticalscaling-srhgr deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
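The vertical scaling above requested `--memory 0.6Gi`, while `list-instances` then reports the limit as `644245094400m`: Kubernetes renders fractional byte quantities in milli-units ("m" suffix = 1/1000 of a byte). A quick sanity check of that conversion (plain arithmetic, no cluster access needed):

```shell
# 0.6Gi = 0.6 * 1024^3 bytes = 644245094.4 bytes = 644245094400 milli-bytes.
milli=644245094400
gib=$(awk -v m="$milli" 'BEGIN { printf "%.1f", m / 1000 / (1024 * 1024 * 1024) }')
echo "${gib}Gi"   # matches the requested 0.6Gi
```

So the odd-looking `644245094400m / 644245094400m` column is exactly the 0.6Gi limit that the OpsRequest applied, not a scaling error.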
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835990932c233c1d05e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:49:14.073+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:49:14.073+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
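Each data check above pipes a command string into `kubectl exec`. A sketch of how that one-liner can be assembled from variables (the password here is a placeholder, not the cluster's real one):

```shell
# Assemble the exec'd mongosh one-liner used by the data checks.
# Values mirror the log; "examplepw" is a placeholder password.
host=mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local
user=root
pass=examplepw
query='db.col.find()'

cmd="echo \"$query\" | mongosh --host $host --port 27017 -u $user -p $pass --authenticationDatabase admin admin"

# In the harness this string is then piped into:
#   kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash
echo "$cmd"
```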
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835991ee972a7546e5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:50:02.553+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:50:02.553+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
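The read-only check above connects to a separate `-ro` service. Assuming the suffix convention seen throughout this log, the read-only host can be derived from the read-write one:

```shell
# Derive the read-only service host from the read-write one by the "-ro"
# suffix convention this log uses for its readPref('secondary') checks.
rw_host=mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local

svc=${rw_host%%.*}      # service name: the first DNS label
domain=${rw_host#*.}    # namespace + cluster domain
ro_host="${svc}-ro.${domain}"

echo "$ro_host"
```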
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`mongosh mongodb://mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local`
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
exec return msg: Current Mongosh Log ID: 6835998380d8559ab15e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local/?directConnection=true&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:49:14.073+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:49:14.073+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] test>
connect headlessEndpoints Success
cluster hscale offline instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: mongodb-mvdokf-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: mongodb-mvdokf
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-jcrws
spec:
  type: HorizontalScaling
  clusterName: mongodb-mvdokf
  force: true
  horizontalScaling:
  - componentName: mongodb
    scaleIn:
      onlineInstancesToOffline:
      - mongodb-mvdokf-mongodb-1
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml`
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-hscaleoffinstance-cfddb created
create test_ops_cluster_mongodb-mvdokf.yaml Success
`rm -rf test_ops_cluster_mongodb-mvdokf.yaml`
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-hscaleoffinstance-cfddb ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Running 0/1 May 27,2025 18:53 UTC+0800
ops_status:mongodb-mvdokf-hscaleoffinstance-cfddb ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 18:53 UTC+0800
ops hscaleoffinstance Succeed or Failed Soon
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-9-33.us-west-2.compute.internal/172.31.9.33 May 27,2025 18:48 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234 May 27,2025 18:48 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0; secondary: mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-hscaleoffinstance-cfddb ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 18:53 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-hscaleoffinstance-cfddb ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 18:53 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-hscaleoffinstance-cfddb --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-hscaleoffinstance-cfddb patched
`kbcli cluster delete-ops --name mongodb-mvdokf-hscaleoffinstance-cfddb --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-hscaleoffinstance-cfddb deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets
mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 683599c4b51e05f5fb5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:49:14.073+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:49:14.073+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
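The repeated `check ops status` steps in this log poll `kbcli cluster list-ops` until the OpsRequest reaches a terminal state. A sketch of such a wait loop; `get_ops_status` is a stub standing in for parsing the real kbcli output (e.g. with awk), since no cluster is assumed here:

```shell
# Poll until an OpsRequest reaches a terminal status (Succeed/Failed).
# get_ops_status is a stub: a real script would instead run something like
#   kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws
# and extract the STATUS column. The stub reports Running twice, then Succeed.
_tick=0
get_ops_status() {
  _tick=$((_tick + 1))
  if [ "$_tick" -lt 3 ]; then OPS_STATUS=Running; else OPS_STATUS=Succeed; fi
}

wait_for_ops() {
  attempts=0
  while [ "$attempts" -lt 60 ]; do
    get_ops_status
    case "$OPS_STATUS" in
      Succeed|Failed) echo "$OPS_STATUS"; return 0 ;;
    esac
    attempts=$((attempts + 1))
    # sleep 10   # real polling interval; skipped with the stub
  done
  echo Timeout
  return 1
}

wait_for_ops   # → Succeed
```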
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 683599d62c8afaed0b5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:48:39.850+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:48:39.850+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
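OpsRequest manifests like the scale-in one above (`scaleIn` with `onlineInstancesToOffline`) lend themselves to templating before `kubectl create -f`. A sketch of how the harness could render one, with the cluster and instance names as variables:

```shell
# Render a scale-in OpsRequest like the one applied in the log.
# A sketch only: the harness's actual templating mechanism is not shown
# in the transcript, so this mirrors the manifest's visible structure.
cluster=mongodb-mvdokf
instance=mongodb-mvdokf-mongodb-1
out=/tmp/test_ops_cluster_${cluster}.yaml

cat > "$out" <<EOF
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: ${cluster}-hscaleoffinstance-
  labels:
    app.kubernetes.io/instance: ${cluster}
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-jcrws
spec:
  type: HorizontalScaling
  clusterName: ${cluster}
  force: true
  horizontalScaling:
  - componentName: mongodb
    scaleIn:
      onlineInstancesToOffline:
      - ${instance}
EOF

# The log then applies it with: kubectl create -f "$out"
wc -l < "$out"
```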
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster hscale online instances
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: mongodb-mvdokf-hscaleoninstance-
  labels:
    app.kubernetes.io/instance: mongodb-mvdokf
    app.kubernetes.io/managed-by: kubeblocks
  namespace: ns-jcrws
spec:
  type: HorizontalScaling
  clusterName: mongodb-mvdokf
  force: true
  horizontalScaling:
  - componentName: mongodb
    scaleOut:
      offlineInstancesToOnline:
      - mongodb-mvdokf-mongodb-1
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml`
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-hscaleoninstance-8qqjz created
create test_ops_cluster_mongodb-mvdokf.yaml Success
`rm -rf test_ops_cluster_mongodb-mvdokf.yaml`
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-hscaleoninstance-8qqjz ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Running 0/1 May 27,2025 18:54 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating (repeated 4×)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-9-33.us-west-2.compute.internal/172.31.9.33 May 27,2025 18:48 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 18:54 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-2-234.us-west-2.compute.internal/172.31.2.234 May 27,2025 18:48 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0; secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
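The `MEMORY(REQUEST/LIMIT)` column reads `644245094400m`: a Kubernetes quantity in milli units, i.e. 644245094400/1000 = 644245094.4 bytes, which is exactly 0.6 × 1024³ bytes = 0.6Gi (presumably the limit after the earlier VerticalScaling op, since the cluster was created with 0.5Gi). A small conversion sketch:

```shell
# Convert a Kubernetes milli-quantity (e.g. "644245094400m") to GiB.
# Sketch only: handles just the plain-milli form seen in this kbcli output.
to_gib() {
  q="$1"
  millis="${q%m}"    # strip the trailing "m"
  awk -v m="$millis" 'BEGIN { printf "%.4f\n", m / 1000 / (1024 * 1024 * 1024) }'
}

to_gib 644245094400m   # the MEMORY value above → 0.6000
```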
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-hscaleoninstance-8qqjz ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 18:54 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-hscaleoninstance-8qqjz ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 18:54 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-hscaleoninstance-8qqjz --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-hscaleoninstance-8qqjz patched
`kbcli cluster delete-ops --name mongodb-mvdokf-hscaleoninstance-8qqjz --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-hscaleoninstance-8qqjz deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
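Before force-deleting each OpsRequest, the harness clears its finalizers with a JSON merge patch. The payload is small enough to sanity-check in isolation (the `kubectl patch` invocation is shown only as a comment, since no cluster is assumed here):

```shell
# The merge patch that empties an object's finalizers, applied in the log as:
#   kubectl patch -p "$patch" --type=merge opsrequests.operations <ops-name> --namespace ns-jcrws
patch='{"metadata":{"finalizers":[]}}'

# Sanity-check the payload before handing it to kubectl.
printf '%s\n' "$patch"
```

Clearing finalizers this way bypasses the controller's cleanup hooks, which is why the log pairs it with `delete-ops --force`.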
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359a42119ef3f3605e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:49:14.073+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:49:14.073+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359a565fec8278c05e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:55:11.580+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:55:11.581+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster stop
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster stop mongodb-mvdokf --auto-approve --force=true --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-stop-h4k2p created successfully, you can view the progress:
kbcli cluster describe-ops mongodb-mvdokf-stop-h4k2p -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-stop-h4k2p ns-jcrws Stop mongodb-mvdokf mongodb Running 0/3 May 27,2025 18:56 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Stopping May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Stopping (repeated 7×)
check cluster status done
cluster_status:Stopped
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
check pod status done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-stop-h4k2p ns-jcrws Stop mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:56 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-stop-h4k2p ns-jcrws Stop mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:56 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-stop-h4k2p --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-stop-h4k2p patched
`kbcli cluster delete-ops --name mongodb-mvdokf-stop-h4k2p --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-stop-h4k2p deleted
cluster start
check cluster status before ops
check cluster status done
cluster_status:Stopped
`kbcli cluster start mongodb-mvdokf --force=true --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-start-dp5pv created successfully, you can view the progress:
kbcli cluster describe-ops mongodb-mvdokf-start-dp5pv -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-start-dp5pv ns-jcrws Start mongodb-mvdokf mongodb Running 0/3 May 27,2025 18:57 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating (repeated 11×)
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 18:57 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 18:58 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:3Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 18:58 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0; secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
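Several checks in this transcript reduce a `list-instances` table to a pod count (three rows once the cluster is running again, none while it was stopped). A sketch using a stubbed table in place of live kbcli output:

```shell
# Count data rows in a kbcli list-instances table: lines minus the header.
# The stub below abbreviates the three-pod output above; a stopped cluster
# prints the header only, giving a count of 0.
table='NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary'

count_instances() {
  printf '%s\n' "$1" | awk 'NR > 1' | wc -l | tr -d ' '
}

count_instances "$table"   # → 3
```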
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME
mongodb-mvdokf-start-dp5pv ns-jcrws Start mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:57 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-start-dp5pv ns-jcrws Start mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 18:57 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-start-dp5pv --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-start-dp5pv patched
`kbcli cluster delete-ops --name mongodb-mvdokf-start-dp5pv --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-start-dp5pv deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359b2ebe66dfb9665e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T10:58:14.877+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:58:14.877+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359b42b73ec37c3e5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T10:58:44.850+00:00: You are running this process as the root user, which is not recommended
2025-05-27T10:58:44.850+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster mongodb scale-out
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in mongodb-mvdokf namespace.
`kbcli cluster scale-out mongodb-mvdokf --auto-approve --force=true --components mongodb --replicas 1 --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-horizontalscaling-s79ml created successfully, you can view the progress:
        kbcli cluster describe-ops mongodb-mvdokf-horizontalscaling-s79ml -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-horizontalscaling-s79ml   ns-jcrws   HorizontalScaling   mongodb-mvdokf   mongodb   Running   0/1   May 27,2025 19:00 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
mongodb-mvdokf   ns-jcrws   mongodb   DoNotTerminate   Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 18:57 UTC+0800
mongodb-mvdokf-mongodb-1
ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 18:58 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 18:58 UTC+0800
mongodb-mvdokf-mongodb-3   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-10-29.us-west-2.compute.internal/172.31.10.29   May 27,2025 19:00 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 mongodb-mvdokf-mongodb-3
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
No resources found in mongodb-mvdokf namespace.
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-horizontalscaling-s79ml   ns-jcrws   HorizontalScaling   mongodb-mvdokf   mongodb   Succeed   1/1   May 27,2025 19:00 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-horizontalscaling-s79ml ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 19:00 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-horizontalscaling-s79ml --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-horizontalscaling-s79ml patched
`kbcli cluster delete-ops --name mongodb-mvdokf-horizontalscaling-s79ml --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-horizontalscaling-s79ml deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
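The `kubectl patch` above clears the OpsRequest's finalizers with a JSON merge patch so the forced delete can proceed. A quick offline sanity check of that payload (python3 is used here only to parse the JSON; no cluster required):

```shell
# The merge-patch body used by the test driver: it sets
# metadata.finalizers to an empty list so deletion is not blocked.
patch='{"metadata":{"finalizers":[]}}'
printf '%s' "$patch" | python3 -c '
import json, sys
doc = json.load(sys.stdin)
assert doc["metadata"]["finalizers"] == []
print("patch ok")
'
```

With `--type=merge`, kubectl sends this body as an RFC 7386 merge patch, so the empty list replaces whatever finalizers the object currently carries.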
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359baae5f7acb1055e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T10:58:14.877+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T10:58:14.877+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359bbf1ac882e80a5e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T10:58:44.850+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T10:58:44.850+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster mongodb scale-in
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in mongodb-mvdokf namespace.
`kbcli cluster scale-in mongodb-mvdokf --auto-approve --force=true --components mongodb --replicas 1 --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-horizontalscaling-tvnf9 created successfully, you can view the progress:
        kbcli cluster describe-ops mongodb-mvdokf-horizontalscaling-tvnf9 -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-horizontalscaling-tvnf9   ns-jcrws   HorizontalScaling   mongodb-mvdokf   mongodb   Running   0/1   May 27,2025 19:02 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
mongodb-mvdokf   ns-jcrws   mongodb   DoNotTerminate   Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 18:57 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary
us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 18:58 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 18:58 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
No resources found in mongodb-mvdokf namespace.
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-horizontalscaling-tvnf9   ns-jcrws   HorizontalScaling   mongodb-mvdokf   mongodb   Succeed   1/1   May 27,2025 19:02 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-horizontalscaling-tvnf9 ns-jcrws HorizontalScaling mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 19:02 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-horizontalscaling-tvnf9 --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-horizontalscaling-tvnf9 patched
`kbcli cluster delete-ops --name mongodb-mvdokf-horizontalscaling-tvnf9 --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-horizontalscaling-tvnf9 deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359c1d28ea55d04d5e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T10:58:14.877+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T10:58:14.877+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359c322789fe6f745e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T10:59:10.266+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T10:59:10.266+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster restart
check cluster status before ops
check cluster status done
cluster_status:Running
`kbcli cluster restart mongodb-mvdokf --auto-approve --force=true --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-restart-bjvwr created successfully, you can view the progress:
        kbcli cluster describe-ops mongodb-mvdokf-restart-bjvwr -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-restart-bjvwr   ns-jcrws   Restart   mongodb-mvdokf   mongodb   Running   0/3   May 27,2025 19:04 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
mongodb-mvdokf   ns-jcrws   mongodb   DoNotTerminate   Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws
mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
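The odd-looking 644245094400m in the MEMORY column is Kubernetes quantity notation: a trailing `m` means milli-units, so the value is 644245094400/1000 bytes, which works out to exactly 0.6Gi. The arithmetic, as a quick check:

```shell
# Kubernetes quantities: a trailing 'm' suffix means milli-units, so
# 644245094400m bytes = 644245094400 / 1000 = 644245094.4 bytes = 0.6Gi.
awk 'BEGIN {
  bytes = 644245094400 / 1000
  gib   = bytes / (1024 * 1024 * 1024)
  printf "%.1f bytes = %.1fGi\n", bytes, gib
}'
```

kbcli prints the raw quantity from the pod spec, which is why a fractional-Gi memory setting shows up in milli-bytes rather than as a rounded `Gi` figure.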
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-restart-bjvwr   ns-jcrws   Restart   mongodb-mvdokf   mongodb   Succeed   3/3   May 27,2025 19:04 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-restart-bjvwr ns-jcrws Restart mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 19:04 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-restart-bjvwr --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-restart-bjvwr patched
`kbcli cluster delete-ops --name mongodb-mvdokf-restart-bjvwr --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-restart-bjvwr deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359cec4416ed01b45e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T11:06:04.244+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T11:06:04.244+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359d0002645134c95e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
   The server generated these startup warnings when booting
   2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover dnsrandom
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-mongodb-mvdokf" not found
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnsrandom-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-1
  mode: all
  action: random
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnsrandom-mongodb-mvdokf.yaml`
dnschaos.chaos-mesh.org/test-chaos-mesh-dnsrandom-mongodb-mvdokf created
apply test-chaos-mesh-dnsrandom-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-dnsrandom-mongodb-mvdokf.yaml`
dnsrandom chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
mongodb-mvdokf   ns-jcrws   mongodb   DoNotTerminate   Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnsrandom-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.
dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-mongodb-mvdokf" force deleted
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnsrandom-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-1
failover dnsrandom Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359dc7b71dec82c35e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T11:06:04.244+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T11:06:04.244+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 68359de05e4fd6cab15e739b
Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
   The server generated these startup warnings when booting
   2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
   2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover networkbandwidthover
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-mongodb-mvdokf" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkbandwidthover-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-1
  action: bandwidth
  mode: all
  bandwidth:
    rate: '1bps'
    limit: 20971520
    buffer: 10000
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkbandwidthover-mongodb-mvdokf.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkbandwidthover-mongodb-mvdokf created
apply test-chaos-mesh-networkbandwidthover-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-networkbandwidthover-mongodb-mvdokf.yaml`
networkbandwidthover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS   CREATED-TIME   LABELS
mongodb-mvdokf   ns-jcrws   mongodb   DoNotTerminate   Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:3Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '***"metadata":***"finalizers":[]***' --type=merge NetworkChaos test-chaos-mesh-networkbandwidthover-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-mongodb-mvdokf" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkbandwidthover-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover networkbandwidthover Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.username***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.password***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. 
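The credential blocks above read the root account Secret with `-o jsonpath="{.data.username}"` and `-o jsonpath="{.data.password}"`. Secret `data` fields are base64-encoded, so the harness has to decode them before handing them to `mongosh`. A minimal sketch of that decode step (the sample value is taken from the log; the `kubectl` lookup itself is assumed to have already produced the encoded string):

```shell
# Secret .data values arrive base64-encoded; decode before use.
# The encoding here only simulates what the jsonpath query would return.
encoded=$(printf '%s' '9T2528tMs88Kt1zU' | base64)
DB_PASSWORD=$(printf '%s' "$encoded" | base64 -d)
echo "DB_PASSWORD:${DB_PASSWORD}"
```

In a live cluster the first line would instead be `encoded=$(kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}")`.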
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 68359ea5176b4ef9bd5e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ *** _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' *** ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.username***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.password***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. 
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 68359eba8ed7cc946e5e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ *** _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' *** ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.username***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.password***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. 
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
`kubectl get pvc -l app.kubernetes.io/instance=mongodb-mvdokf,apps.kubeblocks.io/component-name=mongodb,apps.kubeblocks.io/vct-name=data --namespace ns-jcrws `
cluster volume-expand
check cluster status before ops
check cluster status done
cluster_status:Running
No resources found in mongodb-mvdokf namespace.
`kbcli cluster volume-expand mongodb-mvdokf --auto-approve --force=true --components mongodb --volume-claim-templates data --storage 6Gi --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-volumeexpansion-knkxx created successfully, you can view the progress:
    kbcli cluster describe-ops mongodb-mvdokf-volumeexpansion-knkxx -n ns-jcrws
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-volumeexpansion-knkxx   ns-jcrws   VolumeExpansion   mongodb-mvdokf   mongodb   Running   0/3   May 27,2025 19:15 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
No resources found in mongodb-mvdokf namespace.
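The memory column in the instance table prints a Kubernetes milli-quantity (`644245094400m`, where the `m` suffix means thousandths of a byte). A quick sketch of converting that figure to something readable:

```shell
# kbcli prints memory as a milli-quantity: 'm' = 1/1000 of a byte.
milli=644245094400
bytes=$((milli / 1000))        # integer-truncated byte count
mib=$((bytes / 1024 / 1024))   # 614 MiB, i.e. roughly 0.6 GiB
echo "${mib}Mi"
```

The same suffix convention explains the CPU column: `200m` is 200 milli-cores, i.e. 0.2 CPU.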
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME   NAMESPACE   TYPE   CLUSTER   COMPONENT   STATUS   PROGRESS   CREATED-TIME
mongodb-mvdokf-volumeexpansion-knkxx   ns-jcrws   VolumeExpansion   mongodb-mvdokf   mongodb   Succeed   3/3   May 27,2025 19:15 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-volumeexpansion-knkxx ns-jcrws VolumeExpansion mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 19:15 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-volumeexpansion-knkxx --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-volumeexpansion-knkxx patched
`kbcli cluster delete-ops --name mongodb-mvdokf-volumeexpansion-knkxx --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-volumeexpansion-knkxx deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
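Before force-deleting the OpsRequest (and, earlier, each chaos resource), the harness clears `metadata.finalizers` with a JSON merge patch so deletion is not blocked by a controller that is no longer reconciling the object. The patch document itself, reconstructed here as a sketch (applying it still requires the `kubectl patch --type=merge` call shown in the log):

```shell
# Merge patch that empties metadata.finalizers so deletion can proceed.
# Clearing finalizers bypasses controller cleanup, hence the kubectl warning.
PATCH='{"metadata":{"finalizers":[]}}'
echo "$PATCH"
```

This is why each force-delete in the log is followed by a "does not wait for confirmation" warning and a subsequent NotFound on re-check.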
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 68359f469e234107795e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ *** _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' *** ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.username***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.password***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. 
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 68359f5cbdcd5e2b455e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:04.244+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:04.244+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ *** _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' *** ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.username***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.password***"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="***.data.port***"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. 
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover fullcpu
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-mongodb-mvdokf" not found
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: test-chaos-mesh-fullcpu-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0
  mode: all
  stressors:
    cpu:
      workers: 100
      load: 100
  duration: 2m
`kubectl apply -f test-chaos-mesh-fullcpu-mongodb-mvdokf.yaml`
stresschaos.chaos-mesh.org/test-chaos-mesh-fullcpu-mongodb-mvdokf created
apply test-chaos-mesh-fullcpu-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-fullcpu-mongodb-mvdokf.yaml`
fullcpu chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-fullcpu-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-mongodb-mvdokf" force deleted
Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-fullcpu-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-0
failover fullcpu Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a023f2638acff75e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a03709d3bc6df05e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover dnserror
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-mongodb-mvdokf" not found
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: DNSChaos
metadata:
  name: test-chaos-mesh-dnserror-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0
  mode: all
  action: error
  duration: 2m
`kubectl apply -f test-chaos-mesh-dnserror-mongodb-mvdokf.yaml`
dnschaos.chaos-mesh.org/test-chaos-mesh-dnserror-mongodb-mvdokf created
apply test-chaos-mesh-dnserror-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-dnserror-mongodb-mvdokf.yaml`
dnserror chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Running   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME   NAMESPACE   CLUSTER   COMPONENT   STATUS   ROLE   ACCESSMODE   AZ   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE   NODE   CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws   mongodb-mvdokf   mongodb   Running   primary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45   May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88   May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws   mongodb-mvdokf   mongodb   Running   secondary   us-west-2a   200m / 200m   644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge DNSChaos test-chaos-mesh-dnserror-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-mongodb-mvdokf" force deleted
Error from server (NotFound): dnschaos.chaos-mesh.org "test-chaos-mesh-dnserror-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-0
failover dnserror Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a0fd91c1f297765e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a11234862d03345e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover timeoffset
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-mongodb-mvdokf" not found
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: TimeChaos
metadata:
  name: test-chaos-mesh-timeoffset-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0
  mode: all
  timeOffset: '-10m'
  clockIds:
    - CLOCK_REALTIME
  duration: 2m
`kubectl apply -f test-chaos-mesh-timeoffset-mongodb-mvdokf.yaml`
timechaos.chaos-mesh.org/test-chaos-mesh-timeoffset-mongodb-mvdokf created
apply test-chaos-mesh-timeoffset-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-timeoffset-mongodb-mvdokf.yaml`
timeoffset chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge TimeChaos test-chaos-mesh-timeoffset-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
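The MEMORY(REQUEST/LIMIT) column above uses the Kubernetes quantity `m` (milli) suffix, meaning thousandths of the base unit (bytes), so `644245094400m` is 644245094.4 bytes, about 0.6Gi. The conversion:

```shell
# Convert the "644245094400m" quantity from the MEMORY column:
# strip the m suffix (1/1000 of a byte), then divide by 1024^3 for GiB.
awk 'BEGIN { printf "%.1f\n", 644245094400 / 1000 / (1024 * 1024 * 1024) }'
# prints 0.6
```

This explains the seemingly enormous number in the table: it is a sub-gigabyte memory limit rendered in milli-bytes, not a 644 GB allocation.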
The resource may continue to run on the cluster indefinitely.
timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-mongodb-mvdokf" force deleted
Error from server (NotFound): timechaos.chaos-mesh.org "test-chaos-mesh-timeoffset-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-0
failover timeoffset Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a1d82bcc1c16fd5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
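The cleanup step above patches the chaos object's finalizers to an empty list before force-deleting it, so deletion cannot hang on a stuck finalizer. A sketch of that step; the kubectl call is commented out so the snippet runs without a cluster:

```shell
# The merge patch applied before force-deleting a leftover chaos object.
PATCH='{"metadata":{"finalizers":[]}}'
# kubectl patch TimeChaos test-chaos-mesh-timeoffset-mongodb-mvdokf \
#     --namespace ns-jcrws --type=merge -p "$PATCH"
echo "$PATCH"
```

With finalizers cleared, the subsequent force delete returns immediately, which is why the log shows "force deleted" followed by a NotFound on the re-check.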
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a1ede7f5420dcd5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover networkcorruptover
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-mongodb-mvdokf" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networkcorruptover-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0
  mode: all
  action: corrupt
  corrupt:
    corrupt: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networkcorruptover-mongodb-mvdokf.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networkcorruptover-mongodb-mvdokf created
apply test-chaos-mesh-networkcorruptover-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-networkcorruptover-mongodb-mvdokf.yaml`
networkcorruptover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Failed May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkcorruptover-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-mongodb-mvdokf" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkcorruptover-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-2
failover networkcorruptover Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a2b5133fde2d9a5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a2cb211ccfe9da5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check db_client batch data Success
cluster does not need to check monitor currently
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash`
check cluster connect done
test failover networklossover
check node drain
check node drain success
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-mongodb-mvdokf" not found
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-mongodb-mvdokf" not found
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: test-chaos-mesh-networklossover-mongodb-mvdokf
  namespace: ns-jcrws
spec:
  selector:
    namespaces:
      - ns-jcrws
    labelSelectors:
      apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-2
  mode: all
  action: loss
  loss:
    loss: '100'
    correlation: '100'
  direction: to
  duration: 2m
`kubectl apply -f test-chaos-mesh-networklossover-mongodb-mvdokf.yaml`
networkchaos.chaos-mesh.org/test-chaos-mesh-networklossover-mongodb-mvdokf created
apply test-chaos-mesh-networklossover-mongodb-mvdokf.yaml Success
`rm -rf test-chaos-mesh-networklossover-mongodb-mvdokf.yaml`
networklossover chaos test waiting 120 seconds
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networklossover-mongodb-mvdokf --namespace ns-jcrws `
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-mongodb-mvdokf" force deleted
Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networklossover-mongodb-mvdokf" not found
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-1
failover networklossover Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a3b575ec6eb50e5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:04.244+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:04.244+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a3cbc8b623f1035e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check db_client batch data Success
test failover kill1
check node drain
check node drain success
`kill 1`
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
exec return message:
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS
mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800
mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800
mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash`
check cluster connect done
check failover pod name
failover pod name:mongodb-mvdokf-mongodb-0
failover kill1 Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
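The role check above derives the primary/secondary split from the ROLE column of `kbcli cluster list-instances`. A hypothetical reconstruction of that parse, fed from a canned two-row sample shaped like the tables in this log (ROLE is field 6 in whitespace-split rows), rather than a live cluster:

```shell
# Sketch of the "check cluster role" step: select pods whose ROLE is "primary".
# Canned sample input; a real run would pipe `kbcli cluster list-instances`.
primary=$(printf '%s\n' \
  'mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary' \
  'mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary' |
  awk '$6 == "primary" { print $1 }')
echo "$primary"
# prints mongodb-mvdokf-mongodb-0
```

Note that during the kill1 failover the table transiently lists two primaries, so a robust check should tolerate more than one match until the replica set settles.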
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a42038cd28fa5f5e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835a434e1d526e0845e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended
2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success test failover networkpartition check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-mongodb-mvdokf" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-mongodb-mvdokf" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkpartition-mongodb-mvdokf namespace: ns-jcrws spec: selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0 action: partition mode: all target: mode: all selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-1 direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkpartition-mongodb-mvdokf.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkpartition-mongodb-mvdokf created apply test-chaos-mesh-networkpartition-mongodb-mvdokf.yaml Success `rm -rf test-chaos-mesh-networkpartition-mongodb-mvdokf.yaml` networkpartition chaos test waiting 120 seconds check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35
UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
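The role check above identifies the primary and secondaries from the ROLE column of the `kbcli cluster list-instances` table. A sketch of that parsing step, run against sample NAME/ROLE pairs copied from the listing above (a real run would pipe live `kbcli` output through the same `awk` instead):

```shell
# Sample NAME/ROLE pairs mirroring the list-instances output above.
instances='mongodb-mvdokf-mongodb-0 primary
mongodb-mvdokf-mongodb-1 secondary
mongodb-mvdokf-mongodb-2 secondary'

# Pick the pod whose ROLE column is "primary"; the rest are secondaries.
primary=$(printf '%s\n' "$instances" | awk '$2 == "primary" {print $1}')
secondaries=$(printf '%s\n' "$instances" | awk '$2 == "secondary" {print $1}' | xargs)

echo "primary: ${primary};secondary: ${secondaries}"
```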
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkpartition-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-mongodb-mvdokf" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkpartition-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover networkpartition Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
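The chaos objects above are cleaned up by first emptying `metadata.finalizers` with a JSON merge patch and then deleting, so a stuck NetworkChaos cannot block teardown. A sketch of the patch payload (the `kubectl` invocation is shown as a comment because it needs a live cluster):

```shell
# JSON merge patch that clears every finalizer on the object.
PATCH='{"metadata":{"finalizers":[]}}'

# Against a real cluster this would be applied as in the log above:
#   kubectl patch NetworkChaos test-chaos-mesh-networkpartition-mongodb-mvdokf \
#     --namespace ns-jcrws --type=merge -p "$PATCH"

echo "$PATCH"
```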
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a522d30cf80a465e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a53694a610e5b85e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success test failover networkdelay check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-mongodb-mvdokf" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-mongodb-mvdokf" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkdelay-mongodb-mvdokf namespace: ns-jcrws spec: selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0 mode: all action: delay delay: latency: 2000ms correlation: '100' jitter: 0ms direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkdelay-mongodb-mvdokf.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkdelay-mongodb-mvdokf created apply test-chaos-mesh-networkdelay-mongodb-mvdokf.yaml Success `rm -rf test-chaos-mesh-networkdelay-mongodb-mvdokf.yaml` networkdelay chaos test waiting 120 seconds check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster
status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
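The MEMORY column in the listings above prints Kubernetes quantities in milli-units: `644245094400m` means 644,245,094,400 milli-bytes, i.e. divide by 1000 for bytes and by 1024^3 for Gi. A quick check of the arithmetic:

```shell
# An "m" suffix on a Kubernetes resource quantity means milli-units,
# so 644245094400m bytes = 644245094400 / 1000 bytes.
bytes=$(awk 'BEGIN { printf "%.1f", 644245094400 / 1000 }')
gib=$(awk 'BEGIN { printf "%.1f", 644245094400 / 1000 / (1024 * 1024 * 1024) }')
echo "${bytes} bytes = ${gib}Gi"
```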
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkdelay-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-mongodb-mvdokf" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkdelay-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover networkdelay Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a609f0650494715e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a61e89c7dd42825e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success test switchover cluster promote check cluster status before ops check cluster status done cluster_status:Running `kbcli cluster promote mongodb-mvdokf --auto-approve --force=true --instance mongodb-mvdokf-mongodb-0 --candidate mongodb-mvdokf-mongodb-1 --namespace ns-jcrws ` OpsRequest mongodb-mvdokf-switchover-d6487 created successfully, you can view the progress: kbcli cluster describe-ops mongodb-mvdokf-switchover-d6487 -n ns-jcrws check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-switchover-d6487 ns-jcrws Switchover mongodb-mvdokf mongodb-mvdokf-mongodb Running 0/1 May 27,2025 19:47 UTC+0800 check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m 
/ 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash` check cluster connect done check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-switchover-d6487 ns-jcrws Switchover mongodb-mvdokf mongodb-mvdokf-mongodb Succeed 1/1 May 27,2025 19:47 UTC+0800 check ops status done ops_status:mongodb-mvdokf-switchover-d6487 ns-jcrws Switchover mongodb-mvdokf mongodb-mvdokf-mongodb Succeed 1/1 May 27,2025 19:47 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-switchover-d6487 --namespace ns-jcrws ` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-switchover-d6487 patched `kbcli cluster delete-ops --name
mongodb-mvdokf-switchover-d6487 --force --auto-approve --namespace ns-jcrws ` OpsRequest mongodb-mvdokf-switchover-d6487 deleted `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a67eba95c34d355e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:37:43.544+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:37:43.545+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o
jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a69340eabb58ba5e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check db_client batch data Success switchover pod:mongodb-mvdokf-mongodb-1 switchover success test failover oom check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-mongodb-mvdokf" not found Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-mongodb-mvdokf" not found apiVersion: chaos-mesh.org/v1alpha1 kind: StressChaos metadata: name: test-chaos-mesh-oom-mongodb-mvdokf namespace: ns-jcrws spec: selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-1 mode: all stressors: memory: workers: 1 size: "100GB" oomScoreAdj: -1000 duration: 2m `kubectl apply -f test-chaos-mesh-oom-mongodb-mvdokf.yaml` stresschaos.chaos-mesh.org/test-chaos-mesh-oom-mongodb-mvdokf created apply test-chaos-mesh-oom-mongodb-mvdokf.yaml Success `rm -rf test-chaos-mesh-oom-mongodb-mvdokf.yaml` check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances
mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:05 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge StressChaos test-chaos-mesh-oom-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely. stresschaos.chaos-mesh.org "test-chaos-mesh-oom-mongodb-mvdokf" force deleted Error from server (NotFound): stresschaos.chaos-mesh.org "test-chaos-mesh-oom-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover oom Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a714fc916b0cac5e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets
-l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a729b7dcf16b225e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------ The server generated these startup warnings when booting 2025-05-27T11:50:00.936+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:50:00.937+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success cluster rebuild instances apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-mvdokf-rebuildinstance- namespace: ns-jcrws spec: type: RebuildInstance clusterName: mongodb-mvdokf force: true rebuildFrom: - componentName: mongodb instances: - name: mongodb-mvdokf-mongodb-1 inPlace: true check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-rebuildinstance-b8jg2 created create test_ops_cluster_mongodb-mvdokf.yaml Success `rm -rf test_ops_cluster_mongodb-mvdokf.yaml` check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace
ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Running 0/1 May 27,2025 19:51 UTC+0800 check ops status done ops_status:mongodb-mvdokf-rebuildinstance-b8jg2 ns-jcrws RebuildInstance mongodb-mvdokf mongodb Succeed 1/1 May 27,2025 19:51 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-rebuildinstance-b8jg2 --namespace ns-jcrws ` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-rebuildinstance-b8jg2 patched `kbcli cluster delete-ops --name mongodb-mvdokf-rebuildinstance-b8jg2 --force --auto-approve --namespace ns-jcrws ` OpsRequest mongodb-mvdokf-rebuildinstance-b8jg2 deleted check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS
CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:06 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
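The DB_USERNAME/DB_PASSWORD values printed above are read from the cluster's account Secret, whose `.data` fields are base64-encoded, so each jsonpath read needs a decode step. A minimal sketch of that decode, simulated locally with the username from the log ("root") standing in for a live `kubectl get secrets` call (assumes GNU coreutils `base64`):

```shell
# Stand-in for the encoded value the jsonpath query would return from the
# Secret; "root" matches the DB_USERNAME shown in the log above.
encoded=$(printf 'root' | base64)

# Decode it the way a live pipeline would: kubectl get secrets ... | base64 -d
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "DB_USERNAME:$decoded"
```

Against a real cluster the left side of the pipe would be the `kubectl get secrets ... -o jsonpath="{.data.username}"` command from the log.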
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a7b40a9af9c9c05e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:06:35.393+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:06:35.393+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l
app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a7ca2034a61e515e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------ The server generated these startup warnings when booting 2025-05-27T11:52:19.065+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:52:19.066+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success test failover delete pod:mongodb-mvdokf-mongodb-0 `kubectl delete pod mongodb-mvdokf-mongodb-0 --namespace ns-jcrws ` pod "mongodb-mvdokf-mongodb-0" deleted check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE
ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
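The MEMORY(REQUEST/LIMIT) column above uses Kubernetes' milli-unit quantity notation: `644245094400m` means 644245094400/1000 bytes. A quick sanity check of that reading (plain awk arithmetic, no cluster needed) shows the value decodes to 0.6 GiB:

```shell
# Kubernetes quantity "644245094400m" = 644245094400 milli-bytes.
# Divide by 1000 for bytes, then by 1024^3 for GiB.
awk 'BEGIN { printf "%.1f GiB\n", 644245094400 / 1000 / (1024 * 1024 * 1024) }'
# prints: 0.6 GiB
```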
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash` check cluster connect done check failover pod name failover pod name:mongodb-mvdokf-mongodb-2 failover Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a8351c46ebd7455e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:05:28.866+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:05:28.866+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check
cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a84df601fc14c25e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------ The server generated these startup warnings when booting 2025-05-27T11:54:50.775+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:54:50.775+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash ` check db_client batch data Success test failover podfailure check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-mongodb-mvdokf" not found Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-mongodb-mvdokf" not found apiVersion: chaos-mesh.org/v1alpha1 kind: PodChaos metadata: name: test-chaos-mesh-podfailure-mongodb-mvdokf namespace: ns-jcrws spec: selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-2 mode: all action: pod-failure duration: 2m `kubectl apply -f test-chaos-mesh-podfailure-mongodb-mvdokf.yaml` podchaos.chaos-mesh.org/test-chaos-mesh-podfailure-mongodb-mvdokf created apply test-chaos-mesh-podfailure-mongodb-mvdokf.yaml Success `rm -rf test-chaos-mesh-podfailure-mongodb-mvdokf.yaml` podfailure chaos test waiting 120 seconds check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi 
ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge PodChaos test-chaos-mesh-podfailure-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-mongodb-mvdokf" force deleted Error from server (NotFound): podchaos.chaos-mesh.org "test-chaos-mesh-podfailure-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover podfailure Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a9198a40b105cd5e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:54:50.775+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:54:50.775+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. `echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a931babd48897d5e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:58:41.578+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:58:41.578+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success test failover networkduplicate check node drain check node drain success `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-mongodb-mvdokf" not found Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-mongodb-mvdokf" not found apiVersion: chaos-mesh.org/v1alpha1 kind: NetworkChaos metadata: name: test-chaos-mesh-networkduplicate-mongodb-mvdokf namespace: ns-jcrws spec: selector: namespaces: - ns-jcrws labelSelectors: apps.kubeblocks.io/pod-name: mongodb-mvdokf-mongodb-0 mode: all action: duplicate duplicate: duplicate: '100' correlation: '100' direction: to duration: 2m `kubectl apply -f test-chaos-mesh-networkduplicate-mongodb-mvdokf.yaml` networkchaos.chaos-mesh.org/test-chaos-mesh-networkduplicate-mongodb-mvdokf created apply test-chaos-mesh-networkduplicate-mongodb-mvdokf.yaml Success `rm -rf test-chaos-mesh-networkduplicate-mongodb-mvdokf.yaml` networkduplicate chaos test waiting 120 seconds check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb
DoNotTerminate Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge NetworkChaos test-chaos-mesh-networkduplicate-mongodb-mvdokf --namespace ns-jcrws ` Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-mongodb-mvdokf" force deleted Error from server (NotFound): networkchaos.chaos-mesh.org "test-chaos-mesh-networkduplicate-mongodb-mvdokf" not found check failover pod name failover pod name:mongodb-mvdokf-mongodb-0 failover networkduplicate Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
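The PodChaos and NetworkChaos experiments above follow the same pattern: a Chaos Mesh manifest selects one pod via the `apps.kubeblocks.io/pod-name` label and injects a fault for a fixed `duration`. A hedged sketch of templating such a manifest from shell variables, modeled on the log's pod-failure case (the resource name and output path here are illustrative, not taken from the test harness):

```shell
NS=ns-jcrws
POD=mongodb-mvdokf-mongodb-0
ACTION=pod-failure

# Render a PodChaos manifest like the ones the log applies with
# `kubectl apply -f ...`; writing to /tmp keeps this runnable offline.
cat > /tmp/podchaos-demo.yaml <<EOF
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: demo-podchaos
  namespace: ${NS}
spec:
  selector:
    namespaces:
      - ${NS}
    labelSelectors:
      apps.kubeblocks.io/pod-name: ${POD}
  mode: all
  action: ${ACTION}
  duration: 2m
EOF

# Confirm the selector was substituted into the rendered manifest.
grep -c "pod-name: ${POD}" /tmp/podchaos-demo.yaml
```

Against a live cluster the rendered file would then be passed to `kubectl apply -f`, as in the log.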
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835a9fa32c17d5bb15e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:54:50.775+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:54:50.775+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
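Each data check above drives mongosh non-interactively: an outer `echo` builds a command line that itself echoes the query into mongosh, and the whole string is piped into `bash` inside the pod via `kubectl exec`. The nested quoting can be reproduced locally with `cat` standing in for mongosh, so no cluster is required:

```shell
# The query string, as used in the log's data-consistency check.
QUERY="db.col.find()"

# The outer echo emits:  echo "db.col.find()" | cat
# Piping that line into bash executes it; cat stands in for mongosh here,
# so the printed text is exactly what the in-pod shell would feed mongosh.
echo "echo \"$QUERY\" | cat" | bash
# prints: db.col.find()
```

The escaped `\"` pair is what keeps the query intact across the two shell evaluations (the local one and the one inside the pod).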
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835aa106e4167843c5e739b Connecting to: mongodb://<credentials>@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T11:58:41.578+00:00: You are running this process as the root user, which is not recommended 2025-05-27T11:58:41.578+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success cmpv upgrade service version:3,4.0.28|3,4.2.24|3,4.4.29|3,5.0.28|3,5.0.30|3,6.0.16|3,6.0.20|3,6.0.22|3,7.0.12|3,7.0.16|3,7.0.19|3,8.0.4|3,8.0.6|3,8.0.8 set latest cmpv service version latest service version:7.0.19 cmpv service version upgrade and downgrade upgrade from:7.0.12 to service version:7.0.19 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-mvdokf-upgrade-cmpv- namespace: ns-jcrws spec: clusterName: mongodb-mvdokf upgrade: components: - componentName: mongodb serviceVersion: 7.0.19 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-4lpj6 created create test_ops_cluster_mongodb-mvdokf.yaml Success `rm -rf test_ops_cluster_mongodb-mvdokf.yaml` check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-upgrade-cmpv-4lpj6 ns-jcrws Upgrade mongodb-mvdokf mongodb Running 0/3 May 27,2025 20:04 UTC+0800 check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating 
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45       May 27,2025 19:54 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88     May 27,2025 19:51 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
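The repeated `cluster_status:Updating` lines above come from a poll loop that re-reads the cluster phase until it returns to Running. A minimal sketch of such a loop; the `kubectl` call is mocked by a function here so the control flow is runnable standalone (names and the mocked phase sequence are illustrative):

```shell
#!/bin/sh
# Mocked status source: reports Updating three times, then Running.
# In the live check this would instead run:
#   kubectl get cluster mongodb-mvdokf -n ns-jcrws -o jsonpath='{.status.phase}'
attempt=0
get_cluster_status() {
    attempt=$((attempt + 1))
    if [ "$attempt" -lt 4 ]; then
        status=Updating
    else
        status=Running
    fi
}

# Poll until the phase is Running, echoing each observation like the log does.
while :; do
    get_cluster_status
    echo "cluster_status:${status}"
    [ "$status" = "Running" ] && break
    # sleep 5   # real poll interval; omitted in this mock
done
echo "check cluster status done"
```

The function sets `status` in the caller's shell rather than printing it, so the loop counter survives (a `$(...)` substitution would run the mock in a subshell and never advance `attempt`).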
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-4lpj6   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Succeed   3/3        May 27,2025 20:04 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-upgrade-cmpv-4lpj6 ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:04 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-4lpj6 --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-4lpj6 patched
`kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-4lpj6 --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-upgrade-cmpv-4lpj6 deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
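Before force-deleting the OpsRequest, the run clears its `metadata.finalizers` with a strategic-merge/merge patch so deletion is not blocked by the controller's finalizer. A sketch of the patch payload, sanity-checked as JSON locally (the live invocation is shown in the comment; `python3` is assumed available only for the local validation step):

```shell
#!/bin/sh
# Merge patch that empties metadata.finalizers so a completed OpsRequest can
# be removed immediately. Applied in the log with:
#   kubectl patch opsrequests.operations <name> -n <ns> --type=merge -p "$PATCH"
PATCH='{"metadata":{"finalizers":[]}}'

# Validate the payload is well-formed JSON before sending it anywhere.
printf '%s' "$PATCH" | python3 -c 'import json,sys; json.load(sys.stdin); print("patch ok")'
```

Clearing finalizers bypasses the controller's cleanup hook, which is acceptable here only because the ops has already reached Succeed 3/3.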
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835aab4f7bb2d8c2ed861df
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.0
Using MongoDB: 7.0.19
Using Mongosh: 2.5.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T12:04:39.082+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:04:39.082+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
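Each data check above pipes a mongosh one-liner into the pod via `kubectl exec`. The nested quoting is the fiddly part; a sketch that builds the inner command as a variable first so it can be inspected before being piped in (values copied from the log; the live invocation is left commented out):

```shell
#!/bin/sh
# Build the mongosh one-liner that kubectl exec would run inside the pod.
HOST=mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local
PASSWORD=9T2528tMs88Kt1zU
QUERY='db.col.find()'

inner="echo \"${QUERY}\" | mongosh --host ${HOST} --port 27017 -u root -p ${PASSWORD} --authenticationDatabase admin admin"

# Live invocation (requires a running cluster):
#   echo "$inner" | kubectl exec -i mongodb-mvdokf-mongodb-2 -n ns-jcrws -- bash
echo "$inner"
```

Note that the log uses `kubectl exec -it` with piped stdin, which is what triggers the "Unable to use a TTY" warning in every session; `-i` alone avoids it since no TTY is actually available.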
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835aaca38173c87c3d861df
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.0
Using MongoDB: 7.0.19
Using Mongosh: 2.5.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T12:05:31.486+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:05:31.487+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check db_client batch data Success
downgrade from:7.0.19 to service version:7.0.12
cluster upgrade
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: mongodb-mvdokf-upgrade-cmpv-
  namespace: ns-jcrws
spec:
  clusterName: mongodb-mvdokf
  upgrade:
    components:
    - componentName: mongodb
      serviceVersion: 7.0.12
  type: Upgrade
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml`
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-5m9hm created
create test_ops_cluster_mongodb-mvdokf.yaml Success
`rm -rf test_ops_cluster_mongodb-mvdokf.yaml`
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-5m9hm   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Running   0/3        May 27,2025 20:07 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45       May 27,2025 19:54 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88     May 27,2025 19:51 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-5m9hm   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Succeed   3/3        May 27,2025 20:07 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-upgrade-cmpv-5m9hm ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:07 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-5m9hm --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-5m9hm patched
`kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-5m9hm --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-upgrade-cmpv-5m9hm deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835ab50df0550d1455e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T12:07:33.860+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:07:33.860+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835ab6795501ca6055e739b
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0
Using MongoDB: 7.0.12
Using Mongosh: 2.3.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T12:07:49.581+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:07:49.582+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check db_client batch data Success
upgrade from:7.0.12 to service version:7.0.16
cluster upgrade
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: mongodb-mvdokf-upgrade-cmpv-
  namespace: ns-jcrws
spec:
  clusterName: mongodb-mvdokf
  upgrade:
    components:
    - componentName: mongodb
      serviceVersion: 7.0.16
  type: Upgrade
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml`
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-q9r9k created
create test_ops_cluster_mongodb-mvdokf.yaml Success
`rm -rf test_ops_cluster_mongodb-mvdokf.yaml`
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-q9r9k   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Running   0/3        May 27,2025 20:09 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45       May 27,2025 19:54 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88     May 27,2025 19:51 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-q9r9k   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Succeed   3/3        May 27,2025 20:09 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-upgrade-cmpv-q9r9k ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:09 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-q9r9k --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-q9r9k patched
`kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-q9r9k --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-upgrade-cmpv-q9r9k deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835ac0bb4559c1fc9567a2a
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.7
Using MongoDB: 7.0.16
Using Mongosh: 2.3.7
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T12:10:20.771+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:10:20.771+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835ac21140cb9a683567a2a
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.7
Using MongoDB: 7.0.16
Using Mongosh: 2.3.7
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T12:10:45.473+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:10:45.474+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count
`echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash `
check db_client batch data Success
upgrade from:7.0.16 to service version:7.0.19
cluster upgrade
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: mongodb-mvdokf-upgrade-cmpv-
  namespace: ns-jcrws
spec:
  clusterName: mongodb-mvdokf
  upgrade:
    components:
    - componentName: mongodb
      serviceVersion: 7.0.19
  type: Upgrade
check cluster status before ops
check cluster status done
cluster_status:Running
`kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml`
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-wqvbm created
create test_ops_cluster_mongodb-mvdokf.yaml Success
`rm -rf test_ops_cluster_mongodb-mvdokf.yaml`
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-wqvbm   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Running   0/3        May 27,2025 20:12 UTC+0800
check cluster status
`kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws `
NAME             NAMESPACE   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     CREATED-TIME                 LABELS
mongodb-mvdokf   ns-jcrws    mongodb              DoNotTerminate       Updating   May 27,2025 18:35 UTC+0800   app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
cluster_status:Updating
check cluster status done
cluster_status:Running
check pod status
`kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws `
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ           CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)           STORAGE    NODE                                                        CREATED-TIME
mongodb-mvdokf-mongodb-0   ns-jcrws    mongodb-mvdokf   mongodb     Running   secondary                us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45       May 27,2025 19:54 UTC+0800
mongodb-mvdokf-mongodb-1   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88     May 27,2025 19:51 UTC+0800
mongodb-mvdokf-mongodb-2   ns-jcrws    mongodb-mvdokf   mongodb     Running   primary                  us-west-2a   200m / 200m          644245094400m / 644245094400m   data:6Gi   ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145   May 27,2025 19:05 UTC+0800
check pod status done
check cluster role
check cluster role done
primary: mongodb-mvdokf-mongodb-1;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check cluster connect
`echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash`
check cluster connect done
check ops status
`kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws `
NAME                                NAMESPACE   TYPE      CLUSTER          COMPONENT   STATUS    PROGRESS   CREATED-TIME
mongodb-mvdokf-upgrade-cmpv-wqvbm   ns-jcrws    Upgrade   mongodb-mvdokf   mongodb     Succeed   3/3        May 27,2025 20:12 UTC+0800
check ops status done
ops_status:mongodb-mvdokf-upgrade-cmpv-wqvbm ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:12 UTC+0800
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-wqvbm --namespace ns-jcrws `
opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-wqvbm patched
`kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-wqvbm --force --auto-approve --namespace ns-jcrws `
OpsRequest mongodb-mvdokf-upgrade-cmpv-wqvbm deleted
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash `
check data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835aca777f6923ec1d861df
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.0
Using MongoDB: 7.0.19
Using Mongosh: 2.5.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
------
The server generated these startup warnings when booting
2025-05-27T12:13:16.352+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:13:16.352+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: primary] admin>
check cluster data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash `
check readonly data:
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
Unable to use a TTY - input is not a terminal or the right kind of file
Current Mongosh Log ID: 6835acbf53ece6bd3cd861df
Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.5.0
Using MongoDB: 7.0.19
Using Mongosh: 2.5.0
For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting
2025-05-27T12:13:48.967+00:00: You are running this process as the root user, which is not recommended
2025-05-27T12:13:48.968+00:00: vm.max_map_count is too low
------
mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ]
mongodb-mvdokf-mongodb [direct: secondary] admin>
check cluster readonly data consistent Success
`kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"`
`kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"`
DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:
No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check db_client batch data Success downgrade from:7.0.19 to service version:7.0.16 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-mvdokf-upgrade-cmpv- namespace: ns-jcrws spec: clusterName: mongodb-mvdokf upgrade: components: - componentName: mongodb serviceVersion: 7.0.16 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-44rl7 created create test_ops_cluster_mongodb-mvdokf.yaml Success `rm -rf test_ops_cluster_mongodb-mvdokf.yaml` check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-upgrade-cmpv-44rl7 ns-jcrws Upgrade mongodb-mvdokf mongodb Running 0/3 May 27,2025 20:15 UTC+0800 check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) 
MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-1 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
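The MEMORY column above prints raw Kubernetes quantities such as 644245094400m, where the `m` suffix means one-thousandth of the base unit (bytes here); a quick sketch of the conversion shows this is 0.6Gi per pod:

```shell
# Sketch: convert a Kubernetes milli-quantity (e.g. "644245094400m" bytes)
# into GiB. The "m" suffix divides the base unit by 1000.
to_gib() {
  awk -v q="$1" 'BEGIN { sub(/m$/, "", q); printf "%.1f", q / 1000 / (1024^3) }'
}
to_gib 644245094400m   # prints 0.6
```

644245094400m bytes = 644,245,094.4 bytes = exactly 0.6Gi, which is why the human-readable listings elsewhere show sub-gigabyte memory limits.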
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash` check cluster connect done check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-upgrade-cmpv-44rl7 ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:15 UTC+0800 check ops status done ops_status:mongodb-mvdokf-upgrade-cmpv-44rl7 ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:15 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-44rl7 --namespace ns-jcrws ` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-44rl7 patched `kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-44rl7 --force --auto-approve --namespace ns-jcrws ` OpsRequest mongodb-mvdokf-upgrade-cmpv-44rl7 deleted `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835ad48d32b0f3653567a2a Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.7 Using MongoDB: 7.0.16 Using Mongosh: 2.3.7 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T12:15:56.285+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:15:56.285+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835ad5ee5156ba26d567a2a Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.7 Using MongoDB: 7.0.16 Using Mongosh: 2.3.7 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2025-05-27T12:16:30.237+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:16:30.238+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-2 --namespace ns-jcrws -- bash ` check db_client batch data Success cmpv service version downgrade downgrade from:7.0.16 to service version:7.0.12 cluster upgrade apiVersion: operations.kubeblocks.io/v1alpha1 kind: OpsRequest metadata: generateName: mongodb-mvdokf-upgrade-cmpv- namespace: ns-jcrws spec: clusterName: mongodb-mvdokf upgrade: components: - componentName: mongodb serviceVersion: 7.0.12 type: Upgrade check cluster status before ops check cluster status done cluster_status:Running `kubectl create -f test_ops_cluster_mongodb-mvdokf.yaml` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-kfzt7 created create test_ops_cluster_mongodb-mvdokf.yaml Success `rm -rf test_ops_cluster_mongodb-mvdokf.yaml` check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-upgrade-cmpv-kfzt7 ns-jcrws Upgrade mongodb-mvdokf mongodb Running 0/3 May 27,2025 20:18 UTC+0800 check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb DoNotTerminate Updating May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating cluster_status:Updating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE 
AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0 mongodb-mvdokf-mongodb-2;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done check ops status `kbcli cluster list-ops mongodb-mvdokf --status all --namespace ns-jcrws ` NAME NAMESPACE TYPE CLUSTER COMPONENT STATUS PROGRESS CREATED-TIME mongodb-mvdokf-upgrade-cmpv-kfzt7 ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:18 UTC+0800 check ops status done ops_status:mongodb-mvdokf-upgrade-cmpv-kfzt7 ns-jcrws Upgrade mongodb-mvdokf mongodb Succeed 3/3 May 27,2025 20:18 UTC+0800 `kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge opsrequests.operations mongodb-mvdokf-upgrade-cmpv-kfzt7 --namespace ns-jcrws ` opsrequest.operations.kubeblocks.io/mongodb-mvdokf-upgrade-cmpv-kfzt7 patched `kbcli cluster delete-ops --name mongodb-mvdokf-upgrade-cmpv-kfzt7 --force --auto-approve --namespace ns-jcrws ` OpsRequest mongodb-mvdokf-upgrade-cmpv-kfzt7 deleted `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
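Both version changes above are expressed as Upgrade OpsRequests whose target serviceVersion sorts lower than the running one (7.0.19 → 7.0.16 → 7.0.12). A sketch of that version comparison, under the assumption that semantic version ordering (as `sort -V` computes it) is what classifies the step as a downgrade — the harness's actual comparison logic is not shown in the log:

```shell
# Sketch: decide whether moving from $current to $target is a downgrade
# by ordering the two versions semantically with sort -V (GNU coreutils).
current=7.0.16
target=7.0.12
lowest=$(printf '%s\n%s\n' "$current" "$target" | sort -V | head -n1)
if [ "$lowest" = "$target" ] && [ "$target" != "$current" ]; then
  echo "downgrade from:$current to service version:$target"
fi
```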
`echo "echo \"db.col.find()\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835ade55304ddc6c05e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T12:18:51.679+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:18:51.680+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: primary] admin> check cluster data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
`echo "echo \"db.col.find().readPref('secondary')\" | mongosh --host mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-1 --namespace ns-jcrws -- bash ` check readonly data: Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835adfcb42f8b7fc15e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb-ro.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy). You can opt-out by running the disableTelemetry() command. ------ The server generated these startup warnings when booting 2025-05-27T12:19:08.587+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:19:08.587+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: secondary] admin> [ { _id: ObjectId('6835972a21db4cde605e739c'), a: 'sxxpu' } ] mongodb-mvdokf-mongodb [direct: secondary] admin> check cluster readonly data consistent Success `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check db_client batch data count `echo "echo \"db.executions_loop_table.estimatedDocumentCount();\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin" | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash ` check db_client batch data Success cluster update terminationPolicy WipeOut `kbcli cluster update mongodb-mvdokf --termination-policy=WipeOut --namespace ns-jcrws ` cluster.apps.kubeblocks.io/mongodb-mvdokf updated check cluster status `kbcli cluster list mongodb-mvdokf --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf ns-jcrws mongodb WipeOut Running May 27,2025 18:35 UTC+0800 app.kubernetes.io/instance=mongodb-mvdokf,clusterdefinition.kubeblocks.io/name=mongodb check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-mongodb-0 ns-jcrws mongodb-mvdokf mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 19:54 UTC+0800 mongodb-mvdokf-mongodb-1 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 19:51 UTC+0800 mongodb-mvdokf-mongodb-2 ns-jcrws mongodb-mvdokf mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-15-145.us-west-2.compute.internal/172.31.15.145 May 27,2025 19:05 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-mongodb-0;secondary: mongodb-mvdokf-mongodb-1 mongodb-mvdokf-mongodb-2 `kubectl get secrets -l 
app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace. check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) `db.msg.drop();db.createCollection('msg');db.msg.insertOne({msg: 'kbcli-test-data-mvdokf0',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf1',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf2',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf3',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf4',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf5',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf6',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf7',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf8',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf9',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf10',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf11',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf12',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf13',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf14',time: new Date()});db.msg.insertOne({msg: 'kbcli-test-data-mvdokf15',time: new Date()});` Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835ae4b5b8060b0195e739b Connecting to: mongodb://@mongodb-mvdokf-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T12:18:51.679+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:18:51.680+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-mongodb [direct: primary] admin> { acknowledged: true,
insertedId: ObjectId('6835ae5a5b8060b0195e73ab') } mongodb-mvdokf-mongodb [direct: primary] admin> cluster dump backup `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.name}"` `kubectl get backuprepo backuprepo-kbcli-test -o jsonpath="{.spec.credential.namespace}"` `kubectl get secrets kb-backuprepo-4dp9j -n kb-edsjw -o jsonpath="{.data.accessKeyId}"` `kubectl get secrets kb-backuprepo-4dp9j -n kb-edsjw -o jsonpath="{.data.secretAccessKey}"` KUBEBLOCKS NAMESPACE:kb-edsjw get kubeblocks namespace done `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-edsjw -o jsonpath="{.items[0].data.root-user}"` `kubectl get secrets -l app.kubernetes.io/instance=kbcli-test-minio --namespace kb-edsjw -o jsonpath="{.items[0].data.root-password}"` minio_user:kbclitest,minio_password:kbclitest,minio_endpoint:kbcli-test-minio.kb-edsjw.svc.cluster.local:9000 list minio bucket kbcli-test `echo 'mc config host add minioserver http://kbcli-test-minio.kb-edsjw.svc.cluster.local:9000 kbclitest kbclitest;mc ls minioserver' | kubectl exec -it kbcli-test-minio-7655786fc8-cmdg8 --namespace kb-edsjw -- bash` Unable to use a TTY - input is not a terminal or the right kind of file list minio bucket done default backuprepo:backuprepo-kbcli-test exists `kbcli cluster backup mongodb-mvdokf --method dump --namespace ns-jcrws ` Backup backup-ns-jcrws-mongodb-mvdokf-20250527202201 created successfully, you can view the progress: kbcli cluster list-backups --name=backup-ns-jcrws-mongodb-mvdokf-20250527202201 -n ns-jcrws check backup status `kbcli cluster list-backups mongodb-mvdokf --namespace ns-jcrws ` NAME NAMESPACE SOURCE-CLUSTER METHOD STATUS TOTAL-SIZE DURATION DELETION-POLICY CREATE-TIME COMPLETION-TIME EXPIRATION backup-ns-jcrws-mongodb-mvdokf-20250527202201 ns-jcrws mongodb-mvdokf dump Running Delete May 27,2025 20:22 UTC+0800 backup_status:mongodb-mvdokf-dump-Running backup_status:mongodb-mvdokf-dump-Running
check backup status done backup_status:backup-ns-jcrws-mongodb-mvdokf-20250527202201 ns-jcrws mongodb-mvdokf dump Completed 3107 10s Delete May 27,2025 20:22 UTC+0800 May 27,2025 20:22 UTC+0800 cluster restore backup Error from server (NotFound): opsrequests.operations.kubeblocks.io "mongodb-mvdokf-backup" not found `kbcli cluster describe-backup --names backup-ns-jcrws-mongodb-mvdokf-20250527202201 --namespace ns-jcrws ` Name: backup-ns-jcrws-mongodb-mvdokf-20250527202201 Cluster: mongodb-mvdokf Namespace: ns-jcrws Spec: Method: dump Policy Name: mongodb-mvdokf-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-jcrws-mongodb-mvdokf-20250527202201-017da TargetPodName: mongodb-mvdokf-mongodb-1 Phase: Completed Start Time: May 27,2025 20:22 UTC+0800 Completion Time: May 27,2025 20:22 UTC+0800 Status: Phase: Completed Total Size: 3107 ActionSet Name: mongodb-dump-br Repository: backuprepo-kbcli-test Duration: 10s Start Time: May 27,2025 20:22 UTC+0800 Completion Time: May 27,2025 20:22 UTC+0800 Path: /ns-jcrws/mongodb-mvdokf-1091860a-f4e7-4b46-88a3-39c794b47f34/mongodb/backup-ns-jcrws-mongodb-mvdokf-20250527202201 Time Range Start: May 27,2025 20:21 UTC+0800 Time Range End: May 27,2025 20:21 UTC+0800 Warning Events: `kbcli cluster restore mongodb-mvdokf-backup --backup backup-ns-jcrws-mongodb-mvdokf-20250527202201 --namespace ns-jcrws ` Cluster mongodb-mvdokf-backup created check cluster status `kbcli cluster list mongodb-mvdokf-backup --show-labels --namespace ns-jcrws ` NAME NAMESPACE CLUSTER-DEFINITION TERMINATION-POLICY STATUS CREATED-TIME LABELS mongodb-mvdokf-backup ns-jcrws mongodb WipeOut Creating May 27,2025 20:22 UTC+0800 clusterdefinition.kubeblocks.io/name=mongodb cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating cluster_status:Creating 
cluster_status:Creating cluster_status:Creating check cluster status done cluster_status:Running check pod status `kbcli cluster list-instances mongodb-mvdokf-backup --namespace ns-jcrws ` NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME mongodb-mvdokf-backup-mongodb-0 ns-jcrws mongodb-mvdokf-backup mongodb Running primary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-4-45.us-west-2.compute.internal/172.31.4.45 May 27,2025 20:22 UTC+0800 mongodb-mvdokf-backup-mongodb-1 ns-jcrws mongodb-mvdokf-backup mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-10-29.us-west-2.compute.internal/172.31.10.29 May 27,2025 20:22 UTC+0800 mongodb-mvdokf-backup-mongodb-2 ns-jcrws mongodb-mvdokf-backup mongodb Running secondary us-west-2a 200m / 200m 644245094400m / 644245094400m data:6Gi ip-172-31-12-88.us-west-2.compute.internal/172.31.12.88 May 27,2025 20:23 UTC+0800 check pod status done check cluster role check cluster role done primary: mongodb-mvdokf-backup-mongodb-0;secondary: mongodb-mvdokf-backup-mongodb-1 mongodb-mvdokf-backup-mongodb-2 `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf-backup` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
check cluster connect `echo " echo \"\" | mongosh --host mongodb-mvdokf-backup-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-backup-mongodb-0 --namespace ns-jcrws -- bash` check cluster connect done `kbcli cluster describe-backup --names backup-ns-jcrws-mongodb-mvdokf-20250527202201 --namespace ns-jcrws ` Name: backup-ns-jcrws-mongodb-mvdokf-20250527202201 Cluster: mongodb-mvdokf Namespace: ns-jcrws Spec: Method: dump Policy Name: mongodb-mvdokf-mongodb-backup-policy Actions: dp-backup-0: ActionType: Job WorkloadName: dp-backup-0-backup-ns-jcrws-mongodb-mvdokf-20250527202201-017da TargetPodName: mongodb-mvdokf-mongodb-1 Phase: Completed Start Time: May 27,2025 20:22 UTC+0800 Completion Time: May 27,2025 20:22 UTC+0800 Status: Phase: Completed Total Size: 3107 ActionSet Name: mongodb-dump-br Repository: backuprepo-kbcli-test Duration: 10s Start Time: May 27,2025 20:22 UTC+0800 Completion Time: May 27,2025 20:22 UTC+0800 Path: /ns-jcrws/mongodb-mvdokf-1091860a-f4e7-4b46-88a3-39c794b47f34/mongodb/backup-ns-jcrws-mongodb-mvdokf-20250527202201 Time Range Start: May 27,2025 20:21 UTC+0800 Time Range End: May 27,2025 20:21 UTC+0800 Warning Events: `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf-backup` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
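The harness repeatedly prints connection info as a single `DB_USERNAME:...;DB_PASSWORD:...;DB_PORT:...;DB_DATABASE:` line; a small sketch of splitting that summary back into shell variables (the string is hard-coded here, and the `eval`-based assignment is one plausible way the fields could be consumed):

```shell
# Sketch: parse the semicolon-delimited "KEY:value" summary line into variables.
info='DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE:'
IFS=';'
for kv in $info; do
  key=${kv%%:*}   # text before the first colon
  val=${kv#*:}    # text after the first colon (may be empty, e.g. DB_DATABASE)
  eval "$key=\$val"
done
unset IFS
echo "$DB_USERNAME $DB_PORT"
```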
`db.msg.find();` Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835aeee30991e7d945e739b Connecting to: mongodb://@mongodb-mvdokf-backup-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T12:22:44.386+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:22:44.386+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-backup-mongodb [direct: primary] admin> [ { _id: ObjectId('6835ae585b8060b0195e739c'), msg: 'kbcli-test-data-mvdokf0', time: ISODate('2025-05-27T12:21:44.383Z') }, { _id: ObjectId('6835ae585b8060b0195e739d'), msg: 'kbcli-test-data-mvdokf1', time: ISODate('2025-05-27T12:21:44.586Z') }, { _id: ObjectId('6835ae585b8060b0195e739e'), msg: 'kbcli-test-data-mvdokf2', time: ISODate('2025-05-27T12:21:44.978Z') }, { _id: ObjectId('6835ae595b8060b0195e739f'), msg: 'kbcli-test-data-mvdokf3', time: ISODate('2025-05-27T12:21:45.275Z') }, { _id: ObjectId('6835ae595b8060b0195e73a0'), msg: 'kbcli-test-data-mvdokf4', time: ISODate('2025-05-27T12:21:45.475Z') }, { _id: ObjectId('6835ae595b8060b0195e73a1'), msg: 'kbcli-test-data-mvdokf5', time: ISODate('2025-05-27T12:21:45.578Z') }, { _id: ObjectId('6835ae595b8060b0195e73a2'), msg: 'kbcli-test-data-mvdokf6', time: ISODate('2025-05-27T12:21:45.776Z') }, { _id: ObjectId('6835ae595b8060b0195e73a3'), msg: 'kbcli-test-data-mvdokf7', time: ISODate('2025-05-27T12:21:45.876Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a4'), msg: 'kbcli-test-data-mvdokf8', time: ISODate('2025-05-27T12:21:46.076Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a5'), msg:
'kbcli-test-data-mvdokf9', time: ISODate('2025-05-27T12:21:46.275Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a6'), msg: 'kbcli-test-data-mvdokf10', time: ISODate('2025-05-27T12:21:46.476Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a7'), msg: 'kbcli-test-data-mvdokf11', time: ISODate('2025-05-27T12:21:46.674Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a8'), msg: 'kbcli-test-data-mvdokf12', time: ISODate('2025-05-27T12:21:46.774Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73a9'), msg: 'kbcli-test-data-mvdokf13', time: ISODate('2025-05-27T12:21:46.872Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73aa'), msg: 'kbcli-test-data-mvdokf14', time: ISODate('2025-05-27T12:21:46.974Z') }, { _id: ObjectId('6835ae5a5b8060b0195e73ab'), msg: 'kbcli-test-data-mvdokf15', time: ISODate('2025-05-27T12:21:46.982Z') } ] mongodb-mvdokf-backup-mongodb [direct: primary] admin> dump backup check data Success cluster connect `kubectl get secrets -l app.kubernetes.io/instance=mongodb-mvdokf-backup` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.username}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.password}"` `kubectl get secrets mongodb-mvdokf-backup-mongodb-account-root -o jsonpath="{.data.port}"` DB_USERNAME:root;DB_PASSWORD:9T2528tMs88Kt1zU;DB_PORT:27017;DB_DATABASE: No resources found in ns-jcrws namespace.
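The sixteen kbcli-test-data rows verified above all follow one pattern; a sketch of generating the same mongosh seed script with a loop (the `mvdokf` suffix is the run's random cluster suffix, hard-coded here for illustration):

```shell
# Sketch: rebuild the seed script that inserted kbcli-test-data-mvdokf0..15.
suffix=mvdokf
script="db.msg.drop();db.createCollection('msg');"
for i in $(seq 0 15); do
  script="${script}db.msg.insertOne({msg: 'kbcli-test-data-${suffix}${i}',time: new Date()});"
done
echo "$script"
```

In the live run, this script is piped into mongosh on the primary pod; after the dump-and-restore, `db.msg.find()` on the restored cluster must return all sixteen documents.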
`echo " echo \"rs.status()\" | mongosh --host mongodb-mvdokf-backup-mongodb-mongodb.ns-jcrws.svc.cluster.local --port 27017 -u root -p 9T2528tMs88Kt1zU --authenticationDatabase admin admin " | kubectl exec -it mongodb-mvdokf-backup-mongodb-0 --namespace ns-jcrws -- bash ` Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init) Unable to use a TTY - input is not a terminal or the right kind of file Current Mongosh Log ID: 6835af05a3871b1dbb5e739b Connecting to: mongodb://@mongodb-mvdokf-backup-mongodb-mongodb.ns-jcrws.svc.cluster.local:27017/admin?directConnection=true&authSource=admin&appName=mongosh+2.3.0 Using MongoDB: 7.0.12 Using Mongosh: 2.3.0 For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ ------ The server generated these startup warnings when booting 2025-05-27T12:22:44.386+00:00: You are running this process as the root user, which is not recommended 2025-05-27T12:22:44.386+00:00: vm.max_map_count is too low ------ mongodb-mvdokf-backup-mongodb [direct: primary] admin> *** set: 'mongodb-mvdokf-backup-mongodb', date: ISODate('2025-05-27T12:24:46.572Z'), myState: 1, term: Long('1'), syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'), majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3, optimes: *** lastCommittedOpTime: *** ts: Timestamp(*** t: 1748348686, i: 1 ***), t: Long('1') ***, lastCommittedWallTime: ISODate('2025-05-27T12:24:46.076Z'), readConcernMajorityOpTime: *** ts: Timestamp(*** t: 1748348686, i: 1 ***), t: Long('1') ***, appliedOpTime: *** ts: Timestamp(*** t: 1748348686, i: 1 ***), t: Long('1') ***, durableOpTime: *** ts: Timestamp(*** t: 1748348686, i: 1 ***), t: Long('1') ***, lastAppliedWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastDurableWallTime: ISODate('2025-05-27T12:24:46.076Z') ***, lastStableRecoveryTimestamp: Timestamp(*** t: 1748348676, i: 1 ***), electionCandidateMetrics: *** lastElectionReason: 
'electionTimeout', lastElectionDate: ISODate('2025-05-27T12:22:45.674Z'), electionTerm: Long('1'), lastCommittedOpTimeAtElection: *** ts: Timestamp(*** t: 1748348565, i: 1 ***), t: Long('-1') ***, lastSeenOpTimeAtElection: *** ts: Timestamp(*** t: 1748348565, i: 1 ***), t: Long('-1') ***, numVotesNeeded: 1, priorityAtElection: 2, electionTimeoutMillis: Long('10000'), newTermStartDate: ISODate('2025-05-27T12:22:45.888Z'), wMajorityWriteAvailabilityDate: ISODate('2025-05-27T12:22:45.992Z') ***, members: [ *** _id: 0, name: 'mongodb-mvdokf-backup-mongodb-0.mongodb-mvdokf-backup-mongodb-headless.ns-jcrws.svc:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 128, optime: *** ts: Timestamp(*** t: 1748348686, i: 1 ***), t: Long('1') ***, optimeDate: ISODate('2025-05-27T12:24:46.000Z'), lastAppliedWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastDurableWallTime: ISODate('2025-05-27T12:24:46.076Z'), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp(*** t: 1748348565, i: 2 ***), electionDate: ISODate('2025-05-27T12:22:45.000Z'), configVersion: 5, configTerm: 1, self: true, lastHeartbeatMessage: '' ***, *** _id: 1, name: 'mongodb-mvdokf-backup-mongodb-1.mongodb-mvdokf-backup-mongodb-headless.ns-jcrws.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 98, optime: *** ts: Timestamp(*** t: 1748348676, i: 1 ***), t: Long('1') ***, optimeDurable: *** ts: Timestamp(*** t: 1748348676, i: 1 ***), t: Long('1') ***, optimeDate: ISODate('2025-05-27T12:24:36.000Z'), optimeDurableDate: ISODate('2025-05-27T12:24:36.000Z'), lastAppliedWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastDurableWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastHeartbeat: ISODate('2025-05-27T12:24:45.972Z'), lastHeartbeatRecv: ISODate('2025-05-27T12:24:45.377Z'), pingMs: Long('35'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-mvdokf-backup-mongodb-0.mongodb-mvdokf-backup-mongodb-headless.ns-jcrws.svc:27017', syncSourceId: 0, infoMessage: '', 
configVersion: 5, configTerm: 1 ***, *** _id: 2, name: 'mongodb-mvdokf-backup-mongodb-2.mongodb-mvdokf-backup-mongodb-headless.ns-jcrws.svc:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 79, optime: *** ts: Timestamp(*** t: 1748348676, i: 1 ***), t: Long('1') ***, optimeDurable: *** ts: Timestamp(*** t: 1748348676, i: 1 ***), t: Long('1') ***, optimeDate: ISODate('2025-05-27T12:24:36.000Z'), optimeDurableDate: ISODate('2025-05-27T12:24:36.000Z'), lastAppliedWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastDurableWallTime: ISODate('2025-05-27T12:24:46.076Z'), lastHeartbeat: ISODate('2025-05-27T12:24:45.973Z'), lastHeartbeatRecv: ISODate('2025-05-27T12:24:45.872Z'), pingMs: Long('35'), lastHeartbeatMessage: '', syncSourceHost: 'mongodb-mvdokf-backup-mongodb-1.mongodb-mvdokf-backup-mongodb-headless.ns-jcrws.svc:27017', syncSourceId: 1, infoMessage: '', configVersion: 5, configTerm: 1 *** ], ok: 1, '$clusterTime': *** clusterTime: Timestamp(*** t: 1748348686, i: 1 ***), signature: *** hash: Binary.createFromBase64('v5Jgcb2qcbrGvj6l/AvfFoXrmoc=', 0), keyId: Long('7509099908683530247') *** ***, operationTime: Timestamp(*** t: 1748348686, i: 1 ***) *** mongodb-mvdokf-backup-mongodb [direct: primary] admin> connect cluster Success delete cluster mongodb-mvdokf-backup `kbcli cluster delete mongodb-mvdokf-backup --auto-approve --namespace ns-jcrws ` Cluster mongodb-mvdokf-backup deleted pod_info:mongodb-mvdokf-backup-mongodb-0 2/2 Terminating 0 2m27s mongodb-mvdokf-backup-mongodb-1 2/2 Terminating 0 2m4s mongodb-mvdokf-backup-mongodb-2 2/2 Terminating 0 100s No resources found in ns-jcrws namespace. delete cluster pod done No resources found in ns-jcrws namespace. check cluster resource non-exist OK: pvc No resources found in ns-jcrws namespace. delete cluster done No resources found in ns-jcrws namespace. No resources found in ns-jcrws namespace. No resources found in ns-jcrws namespace. 
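The `rs.status()` check above eyeballs member health in the full document. A minimal sketch of how the same check could be scripted: serialize `rs.status()` to JSON with mongosh and count members reporting `health: 1`. `healthy_count` is a hypothetical helper, and the pod/namespace/credential names in the usage comment are the ones from this run:

```shell
# healthy_count: read rs.status() JSON on stdin and count healthy members.
# Simple grep-based sketch; assumes compact EJSON output ("health":1).
healthy_count() { grep -o '"health": *1' | wc -l; }

# Usage against the cluster (assumes $DB_PASSWORD holds the root password):
# kubectl exec mongodb-mvdokf-backup-mongodb-0 -n ns-jcrws -c mongodb -- \
#   mongosh --quiet -u root -p "$DB_PASSWORD" --authenticationDatabase admin \
#     --eval 'EJSON.stringify(rs.status())' | healthy_count
```

For a 3-replica cluster like this one, a result of 3 means all members are healthy.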
cluster delete backup
`kubectl patch -p '{"metadata":{"finalizers":[]}}' --type=merge backups backup-ns-jcrws-mongodb-mvdokf-20250527202201 --namespace ns-jcrws `
backup.dataprotection.kubeblocks.io/backup-ns-jcrws-mongodb-mvdokf-20250527202201 patched
`kbcli cluster delete-backup mongodb-mvdokf --name backup-ns-jcrws-mongodb-mvdokf-20250527202201 --force --auto-approve --namespace ns-jcrws `
Backup backup-ns-jcrws-mongodb-mvdokf-20250527202201 deleted
No opsrequests found in ns-jcrws namespace.
cluster list-logs
`kbcli cluster list-logs mongodb-mvdokf --namespace ns-jcrws `
No log files found.
Error from server (NotFound): pods "mongodb-mvdokf-mongodb-0" not found
cluster logs
`kbcli cluster logs mongodb-mvdokf --tail 30 --namespace ns-jcrws `
Defaulted container "mongodb" out of: mongodb, kbagent, init-syncer (init), kbagent-worker (init)
2025-05-27T12:18:57Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:18:57Z INFO HA Users are created.
2025-05-27T12:18:58Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:18:58Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:18:59Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:00Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:00Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:01Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:02Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:02Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:03Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:04Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:04Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:05Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:06Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:06Z INFO DCS-K8S Found switchover Setting {"configmap": {"candidate":"","leader":"mongodb-mvdokf-mongodb-2"}}
2025-05-27T12:19:06Z INFO HA End switchover record failed {"error": "no matched ha record"}
2025-05-27T12:19:06Z INFO HA Refresh leader ttl
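The `kubectl patch` above clears the Backup's finalizers so the subsequent force-delete cannot hang on them. A minimal sketch of the same two-step flow wrapped in a function; `remove_finalizers` is a hypothetical name, and the backup/namespace values in the usage comment are the ones from this run:

```shell
# Merge-patch that empties the finalizers list on a resource.
finalizer_patch='{"metadata":{"finalizers":[]}}'

# remove_finalizers: clear finalizers on a dataprotection Backup so a
# forced delete can proceed.  $1 = backup name, $2 = namespace
remove_finalizers() {
  kubectl patch backups "$1" -n "$2" --type=merge -p "$finalizer_patch"
}

# Usage (values from this run):
# remove_finalizers backup-ns-jcrws-mongodb-mvdokf-20250527202201 ns-jcrws
# kbcli cluster delete-backup mongodb-mvdokf \
#   --name backup-ns-jcrws-mongodb-mvdokf-20250527202201 \
#   --force --auto-approve --namespace ns-jcrws
```

Clearing finalizers bypasses the controller's normal cleanup, so this is appropriate only in teardown paths like this test, where the namespace is discarded afterwards.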
2025-05-27T12:19:56Z INFO HA This member is Cluster's leader
2025-05-27T12:19:56Z DEBUG HA Refresh leader ttl
2025-05-27T12:20:56Z INFO HA This member is Cluster's leader
2025-05-27T12:20:56Z DEBUG HA Refresh leader ttl
2025-05-27T12:22:00Z INFO HA This member is Cluster's leader
2025-05-27T12:22:00Z DEBUG HA Refresh leader ttl
2025-05-27T12:23:00Z INFO HA This member is Cluster's leader
2025-05-27T12:23:00Z DEBUG HA Refresh leader ttl
2025-05-27T12:24:00Z INFO HA This member is Cluster's leader
2025-05-27T12:24:00Z DEBUG HA Refresh leader ttl
2025-05-27T12:25:00Z INFO HA This member is Cluster's leader
2025-05-27T12:25:00Z DEBUG HA Refresh leader ttl
cluster logs running
`kbcli cluster logs mongodb-mvdokf --tail 30 --file-type=running --namespace ns-jcrws `
==> /data/mongodb/logs/mongodb.log <==
{"t":{"$date":"2025-05-27T12:25:28.455+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn7095","msg":"Interrupted operation as its client disconnected","attr":{"opId":25377}}
{"t":{"$date":"2025-05-27T12:25:28.455+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7096","msg":"Connection ended","attr":{"remote":"172.31.3.220:34952","uuid":{"uuid":{"$uuid":"fc777e5b-2fca-4005-94eb-21e769c170e5"}},"connectionId":7096,"connectionCount":24}}
{"t":{"$date":"2025-05-27T12:25:28.455+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7097","msg":"Connection ended","attr":{"remote":"172.31.3.220:34958","uuid":{"uuid":{"$uuid":"1fa6b1b7-dca8-4b8d-9f66-ef5a704c9b56"}},"connectionId":7097,"connectionCount":23}}
{"t":{"$date":"2025-05-27T12:25:28.455+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7095","msg":"Connection ended","attr":{"remote":"172.31.3.220:34950","uuid":{"uuid":{"$uuid":"0828595d-171a-4f0f-ad5c-df72f36b9348"}},"connectionId":7095,"connectionCount":22}}
{"t":{"$date":"2025-05-27T12:25:28.468+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.217:55120","uuid":{"uuid":{"$uuid":"26235885-8b4e-408b-a0fe-bf68883185b3"}},"connectionId":7098,"connectionCount":23}}
{"t":{"$date":"2025-05-27T12:25:28.469+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.217:55124","uuid":{"uuid":{"$uuid":"58eee912-af6e-440d-bb62-16df029a808e"}},"connectionId":7099,"connectionCount":24}}
{"t":{"$date":"2025-05-27T12:25:28.469+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7098","msg":"client metadata","attr":{"remote":"172.31.3.217:55120","client":"conn7098","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.469+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7099","msg":"client metadata","attr":{"remote":"172.31.3.217:55124","client":"conn7099","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.472+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.217:55138","uuid":{"uuid":{"$uuid":"84c8dc67-610e-49ec-b4f3-31a5a793f9c4"}},"connectionId":7100,"connectionCount":25}}
{"t":{"$date":"2025-05-27T12:25:28.472+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7100","msg":"client metadata","attr":{"remote":"172.31.3.217:55138","client":"conn7100","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.473+00:00"},"s":"I", "c":"ACCESS", "id":6788604, "ctx":"conn7100","msg":"Auth metrics report","attr":{"metric":"acquireUser","micros":0}}
{"t":{"$date":"2025-05-27T12:25:28.478+00:00"},"s":"I", "c":"ACCESS", "id":5286306, "ctx":"conn7100","msg":"Successfully authenticated","attr":{"client":"172.31.3.217:55138","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","result":0,"metrics":{"conversation_duration":{"micros":5792,"summary":{"0":{"step":1,"step_total":2,"duration_micros":69},"1":{"step":2,"step_total":2,"duration_micros":27}}}},"extraInfo":{}}}
{"t":{"$date":"2025-05-27T12:25:28.479+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn7100","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}}
{"t":{"$date":"2025-05-27T12:25:28.480+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn7098","msg":"Interrupted operation as its client disconnected","attr":{"opId":25387}}
{"t":{"$date":"2025-05-27T12:25:28.480+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7100","msg":"Connection ended","attr":{"remote":"172.31.3.217:55138","uuid":{"uuid":{"$uuid":"84c8dc67-610e-49ec-b4f3-31a5a793f9c4"}},"connectionId":7100,"connectionCount":24}}
{"t":{"$date":"2025-05-27T12:25:28.480+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7099","msg":"Connection ended","attr":{"remote":"172.31.3.217:55124","uuid":{"uuid":{"$uuid":"58eee912-af6e-440d-bb62-16df029a808e"}},"connectionId":7099,"connectionCount":23}}
{"t":{"$date":"2025-05-27T12:25:28.480+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7098","msg":"Connection ended","attr":{"remote":"172.31.3.217:55120","uuid":{"uuid":{"$uuid":"26235885-8b4e-408b-a0fe-bf68883185b3"}},"connectionId":7098,"connectionCount":22}}
{"t":{"$date":"2025-05-27T12:25:28.538+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.220:34964","uuid":{"uuid":{"$uuid":"d510fee8-7519-4597-8916-d0bd6817bab2"}},"connectionId":7101,"connectionCount":23}}
{"t":{"$date":"2025-05-27T12:25:28.538+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.220:34978","uuid":{"uuid":{"$uuid":"1c51365b-c27e-44da-be36-3ae977ed0002"}},"connectionId":7102,"connectionCount":24}}
{"t":{"$date":"2025-05-27T12:25:28.538+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7101","msg":"client metadata","attr":{"remote":"172.31.3.220:34964","client":"conn7101","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.539+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7102","msg":"client metadata","attr":{"remote":"172.31.3.220:34978","client":"conn7102","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.542+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.3.220:34992","uuid":{"uuid":{"$uuid":"4c0c37cb-ce4a-4cfb-bc9b-3f3499c49a0d"}},"connectionId":7103,"connectionCount":25}}
{"t":{"$date":"2025-05-27T12:25:28.542+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7103","msg":"client metadata","attr":{"remote":"172.31.3.220:34992","client":"conn7103","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T12:25:28.542+00:00"},"s":"I", "c":"ACCESS", "id":6788604, "ctx":"conn7103","msg":"Auth metrics report","attr":{"metric":"acquireUser","micros":0}}
{"t":{"$date":"2025-05-27T12:25:28.549+00:00"},"s":"I", "c":"ACCESS", "id":5286306, "ctx":"conn7103","msg":"Successfully authenticated","attr":{"client":"172.31.3.220:34992","isSpeculative":true,"isClusterMember":false,"mechanism":"SCRAM-SHA-256","user":"root","db":"admin","result":0,"metrics":{"conversation_duration":{"micros":6395,"summary":{"0":{"step":1,"step_total":2,"duration_micros":64},"1":{"step":2,"step_total":2,"duration_micros":25}}}},"extraInfo":{}}}
{"t":{"$date":"2025-05-27T12:25:28.549+00:00"},"s":"I", "c":"NETWORK", "id":6788700, "ctx":"conn7103","msg":"Received first command on ingress connection since session start or auth handshake","attr":{"elapsedMillis":0}}
{"t":{"$date":"2025-05-27T12:25:28.636+00:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn7101","msg":"Interrupted operation as its client disconnected","attr":{"opId":25395}}
{"t":{"$date":"2025-05-27T12:25:28.636+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7103","msg":"Connection ended","attr":{"remote":"172.31.3.220:34992","uuid":{"uuid":{"$uuid":"4c0c37cb-ce4a-4cfb-bc9b-3f3499c49a0d"}},"connectionId":7103,"connectionCount":24}}
{"t":{"$date":"2025-05-27T12:25:28.636+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7102","msg":"Connection ended","attr":{"remote":"172.31.3.220:34978","uuid":{"uuid":{"$uuid":"1c51365b-c27e-44da-be36-3ae977ed0002"}},"connectionId":7102,"connectionCount":23}}
{"t":{"$date":"2025-05-27T12:25:28.636+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7101","msg":"Connection ended","attr":{"remote":"172.31.3.220:34964","uuid":{"uuid":{"$uuid":"d510fee8-7519-4597-8916-d0bd6817bab2"}},"connectionId":7101,"connectionCount":22}}
==> /data/mongodb/logs/mongodb.log.2025-05-27T10-44-28 <==
{"t":{"$date":"2025-05-27T10:43:55.893+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7025","msg":"Connection ended","attr":{"remote":"172.31.13.37:49244","uuid":{"uuid":{"$uuid":"cd84dc1d-01eb-4af9-b7fc-14fc5c02c80b"}},"connectionId":7025,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:55.893+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7024","msg":"Connection ended","attr":{"remote":"172.31.13.37:49232","uuid":{"uuid":{"$uuid":"ec762ab0-85ba-4f37-b4f0-f2c07e396f8d"}},"connectionId":7024,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:56.031+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.13.37:49252","uuid":{"uuid":{"$uuid":"3194220d-ae06-4c5b-89f1-eb84fdca854f"}},"connectionId":7026,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.031+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.13.37:49258","uuid":{"uuid":{"$uuid":"8b77b87c-2978-4ed5-afa3-04e9eb3c8bbe"}},"connectionId":7027,"connectionCount":21}}
{"t":{"$date":"2025-05-27T10:43:56.032+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7026","msg":"client metadata","attr":{"remote":"172.31.13.37:49252","client":"conn7026","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.032+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7027","msg":"client metadata","attr":{"remote":"172.31.13.37:49258","client":"conn7027","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.033+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7026","msg":"Connection ended","attr":{"remote":"172.31.13.37:49252","uuid":{"uuid":{"$uuid":"3194220d-ae06-4c5b-89f1-eb84fdca854f"}},"connectionId":7026,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.033+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7027","msg":"Connection ended","attr":{"remote":"172.31.13.37:49258","uuid":{"uuid":{"$uuid":"8b77b87c-2978-4ed5-afa3-04e9eb3c8bbe"}},"connectionId":7027,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:56.332+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.13.37:49264","uuid":{"uuid":{"$uuid":"d6d60763-dd14-4cfb-9d6f-7499b8596bc8"}},"connectionId":7028,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.332+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7028","msg":"client metadata","attr":{"remote":"172.31.13.37:49264","client":"conn7028","negotiatedCompressors":["snappy","zstd","zlib"],"doc":{"driver":{"name":"NetworkInterfaceTL-ReplNetwork","version":"7.0.12"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"22.04"}}}}
{"t":{"$date":"2025-05-27T10:43:56.333+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7028","msg":"Connection ended","attr":{"remote":"172.31.13.37:49264","uuid":{"uuid":{"$uuid":"d6d60763-dd14-4cfb-9d6f-7499b8596bc8"}},"connectionId":7028,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:56.578+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.12.156:50224","uuid":{"uuid":{"$uuid":"fea16e44-9cf8-4222-b6bc-c6d76a9b8b5b"}},"connectionId":7029,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.579+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.12.156:50234","uuid":{"uuid":{"$uuid":"dc468ecb-c7fc-4445-a24c-4df4efbc10b4"}},"connectionId":7030,"connectionCount":21}}
{"t":{"$date":"2025-05-27T10:43:56.579+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7029","msg":"client metadata","attr":{"remote":"172.31.12.156:50224","client":"conn7029","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.579+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7030","msg":"client metadata","attr":{"remote":"172.31.12.156:50234","client":"conn7030","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.579+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7030","msg":"Connection ended","attr":{"remote":"172.31.12.156:50234","uuid":{"uuid":{"$uuid":"dc468ecb-c7fc-4445-a24c-4df4efbc10b4"}},"connectionId":7030,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.580+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7029","msg":"Connection ended","attr":{"remote":"172.31.12.156:50224","uuid":{"uuid":{"$uuid":"fea16e44-9cf8-4222-b6bc-c6d76a9b8b5b"}},"connectionId":7029,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:56.762+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.12.156:50238","uuid":{"uuid":{"$uuid":"07716902-356f-4fe1-9fa4-bf251a8c7e91"}},"connectionId":7031,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.762+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.12.156:50246","uuid":{"uuid":{"$uuid":"93e0362e-b8f8-4e59-93c1-66ba97df9319"}},"connectionId":7032,"connectionCount":21}}
{"t":{"$date":"2025-05-27T10:43:56.762+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7031","msg":"client metadata","attr":{"remote":"172.31.12.156:50238","client":"conn7031","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.763+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7032","msg":"client metadata","attr":{"remote":"172.31.12.156:50246","client":"conn7032","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.763+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7031","msg":"Connection ended","attr":{"remote":"172.31.12.156:50238","uuid":{"uuid":{"$uuid":"07716902-356f-4fe1-9fa4-bf251a8c7e91"}},"connectionId":7031,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.764+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7032","msg":"Connection ended","attr":{"remote":"172.31.12.156:50246","uuid":{"uuid":{"$uuid":"93e0362e-b8f8-4e59-93c1-66ba97df9319"}},"connectionId":7032,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:56.892+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.13.37:49270","uuid":{"uuid":{"$uuid":"1cb6595f-5b81-4ea3-9793-616bb0d69620"}},"connectionId":7033,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.892+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"172.31.13.37:49274","uuid":{"uuid":{"$uuid":"75c41686-9368-4192-8295-9cde20203b90"}},"connectionId":7034,"connectionCount":21}}
{"t":{"$date":"2025-05-27T10:43:56.893+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7033","msg":"client metadata","attr":{"remote":"172.31.13.37:49270","client":"conn7033","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.893+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn7034","msg":"client metadata","attr":{"remote":"172.31.13.37:49274","client":"conn7034","negotiatedCompressors":[],"doc":{"driver":{"name":"mongo-go-driver","version":"v1.11.6"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.22.5"}}}
{"t":{"$date":"2025-05-27T10:43:56.893+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7034","msg":"Connection ended","attr":{"remote":"172.31.13.37:49274","uuid":{"uuid":{"$uuid":"75c41686-9368-4192-8295-9cde20203b90"}},"connectionId":7034,"connectionCount":20}}
{"t":{"$date":"2025-05-27T10:43:56.893+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn7033","msg":"Connection ended","attr":{"remote":"172.31.13.37:49270","uuid":{"uuid":{"$uuid":"1cb6595f-5b81-4ea3-9793-616bb0d69620"}},"connectionId":7033,"connectionCount":19}}
{"t":{"$date":"2025-05-27T10:43:57.082+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn30","msg":"Connection ended","attr":{"remote":"127.0.0.1:58162","uuid":{"uuid":{"$uuid":"67a7f7e0-1c2d-4925-bc0a-ee9bf68e10d9"}},"connectionId":30,"connectionCount":18}}
==> /data/mongodb/logs/mongodb.log.2025-05-27T10-49-06 <==
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2025-05-27T10:48:57.369+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2025-05-27T10:48:57.370+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2025-05-27T10:48:57.370+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2025-05-27T10:48:57.370+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2025-05-27T10:48:57.370+00:00"},"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."}
{"t":{"$date":"2025-05-27T10:48:57.371+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2025-05-27T10:48:57.371+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2025-05-27T10:48:57.371+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2025-05-27T10:48:57.372+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2025-05-27T10:48:57.374+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748342937,"ts_usec":374670,"thread":"14:0x7f8a89818640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"}}}
{"t":{"$date":"2025-05-27T10:48:57.375+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748342937,"ts_usec":375784,"thread":"14:0x7f8a89818640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 132, snapshot max: 132 snapshot count: 0, oldest timestamp: (1748342635, 1) , meta checkpoint timestamp: (1748342935, 1) base write gen: 62"}}}
{"t":{"$date":"2025-05-27T10:48:57.471+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748342937,"ts_usec":471198,"thread":"14:0x7f8a89818640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 96 milliseconds"}}}
{"t":{"$date":"2025-05-27T10:48:57.471+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748342937,"ts_usec":471409,"thread":"14:0x7f8a89818640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 99ms, including 2ms for the rollback to stable, and 96ms for the checkpoint."}}}
{"t":{"$date":"2025-05-27T10:48:57.478+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":106}}
{"t":{"$date":"2025-05-27T10:48:57.478+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2025-05-27T10:48:57.478+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2025-05-27T10:48:57.478+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2025-05-27T10:48:57.562+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2025-05-27T10:48:57.562+00:00"},"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"1 ms","Time spent in quiesce mode":"15011 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"1 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"584 ms","Shut down replication executor":"0 ms","Join replication executor":"1 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"1 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"1 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"109 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"84 ms","shutdownTask total elapsed time":"15796 ms"}}}}
{"t":{"$date":"2025-05-27T10:48:57.562+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
==> /data/mongodb/logs/mongodb.log.2025-05-27T10-58-07 <==
{"t":{"$date":"2025-05-27T10:57:41.363+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2025-05-27T10:57:41.363+00:00"},"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the 
collections"*** ***"t":***"$date":"2025-05-27T10:57:41.364+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."*** ***"t":***"$date":"2025-05-27T10:57:41.365+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T10:57:41.365+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T10:57:41.365+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T10:57:41.366+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T10:57:41.368+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343461,"ts_usec":368800,"thread":"14:0x7f339469a640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T10:57:41.370+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343461,"ts_usec":369995,"thread":"14:0x7f339469a640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 131, snapshot max: 131 snapshot count: 0, oldest timestamp: (1748343136, 1) , meta checkpoint timestamp: (1748343436, 1) base write gen: 91"*** ***"t":***"$date":"2025-05-27T10:57:41.382+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748343461,"ts_usec":382733,"thread":"14:0x7f339469a640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 13 milliseconds"*** ***"t":***"$date":"2025-05-27T10:57:41.382+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343461,"ts_usec":382939,"thread":"14:0x7f339469a640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 16ms, including 2ms for the rollback to stable, and 13ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T10:57:41.390+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":24*** ***"t":***"$date":"2025-05-27T10:57:41.390+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T10:57:41.390+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T10:57:41.390+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T10:57:41.394+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T10:57:41.460+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"2 ms","Time spent in quiesce mode":"15014 ms","Shut down FLE Crud subsystem":"0 ms","Shut down 
MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"554 ms","Shut down replication executor":"0 ms","Join replication executor":"1 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"1 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"26 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15673 ms"*** ***"t":***"$date":"2025-05-27T10:57:41.460+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T11-06-28 <== ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T11:06:21.477+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting 
down."*** ***"t":***"$date":"2025-05-27T11:06:21.478+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T11:06:21.478+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T11:06:21.478+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T11:06:21.479+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T11:06:21.481+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343981,"ts_usec":481814,"thread":"14:0x7f8cdf959640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T11:06:21.483+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343981,"ts_usec":483185,"thread":"14:0x7f8cdf959640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 133, snapshot max: 133 snapshot count: 0, oldest timestamp: (1748343677, 2) , meta checkpoint timestamp: (1748343977, 2) base write gen: 136"*** ***"t":***"$date":"2025-05-27T11:06:21.495+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748343981,"ts_usec":495564,"thread":"14:0x7f8cdf959640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 13 milliseconds"*** ***"t":***"$date":"2025-05-27T11:06:21.495+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748343981,"ts_usec":495791,"thread":"14:0x7f8cdf959640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 16ms, including 2ms for the rollback to stable, and 13ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T11:06:21.576+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":97*** ***"t":***"$date":"2025-05-27T11:06:21.576+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T11:06:21.576+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T11:06:21.576+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T11:06:21.580+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T11:06:21.581+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"3 ms","Time spent in quiesce mode":"15012 ms","Shut down FLE Crud subsystem":"0 ms","Shut down 
MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the transport layer":"1 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"342 ms","Shut down replication executor":"0 ms","Join replication executor":"2 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"1 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"99 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15468 ms"*** ***"t":***"$date":"2025-05-27T11:06:21.581+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T11-54-43 <== ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T11:54:37.504+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T11:54:37.505+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T11:54:37.505+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting 
down."*** ***"t":***"$date":"2025-05-27T11:54:37.505+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T11:54:37.505+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T11:54:37.505+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T11:54:37.506+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T11:54:37.509+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748346877,"ts_usec":509274,"thread":"14:0x7f852f99e640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T11:54:37.510+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748346877,"ts_usec":510925,"thread":"14:0x7f852f99e640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 856, snapshot max: 856 snapshot count: 0, oldest timestamp: (1748346571, 1) , meta checkpoint timestamp: (1748346871, 1) base write gen: 183"*** ***"t":***"$date":"2025-05-27T11:54:37.526+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748346877,"ts_usec":526115,"thread":"14:0x7f852f99e640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 16 milliseconds"*** ***"t":***"$date":"2025-05-27T11:54:37.526+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748346877,"ts_usec":526382,"thread":"14:0x7f852f99e640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 19ms, including 2ms for the rollback to stable, and 16ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T11:54:37.577+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":71*** ***"t":***"$date":"2025-05-27T11:54:37.577+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T11:54:37.577+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T11:54:37.577+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T11:54:37.582+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T11:54:37.582+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"3 ms","Time spent in quiesce mode":"15013 ms","Shut down FLE Crud subsystem":"1 ms","Shut down 
MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"1 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"1002 ms","Shut down replication executor":"1 ms","Join replication executor":"0 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"73 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"5 ms","shutdownTask total elapsed time":"16105 ms"*** ***"t":***"$date":"2025-05-27T11:54:37.582+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-05-24 <== ***"t":***"$date":"2025-05-27T12:05:13.475+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T12:05:13.475+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T12:05:13.476+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T12:05:13.477+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting 
down."*** ***"t":***"$date":"2025-05-27T12:05:13.477+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T12:05:13.477+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:05:13.477+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:05:13.478+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T12:05:13.480+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347513,"ts_usec":480587,"thread":"14:0x7f80a85e0640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T12:05:13.482+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347513,"ts_usec":482045,"thread":"14:0x7f80a85e0640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 187, snapshot max: 187 snapshot count: 0, oldest timestamp: (1748347208, 1) , meta checkpoint timestamp: (1748347508, 1) base write gen: 453"*** ***"t":***"$date":"2025-05-27T12:05:13.494+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748347513,"ts_usec":494900,"thread":"14:0x7f80a85e0640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 14 milliseconds"*** ***"t":***"$date":"2025-05-27T12:05:13.495+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347513,"ts_usec":495172,"thread":"14:0x7f80a85e0640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 16ms, including 2ms for the rollback to stable, and 14ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T12:05:13.503+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":25*** ***"t":***"$date":"2025-05-27T12:05:13.503+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T12:05:13.503+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T12:05:13.503+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T12:05:13.507+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T12:05:13.572+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"4 ms","Time spent in quiesce mode":"15011 ms","Shut down FLE Crud subsystem":"0 ms","Shut down 
MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"1 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"1 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"631 ms","Shut down replication executor":"0 ms","Join replication executor":"2 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"1 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"27 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15751 ms"*** ***"t":***"$date":"2025-05-27T12:05:13.572+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-07-42 <== ***"t":***"$date":"2025-05-27T12:07:40.978+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:07:40.978+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:07:40.978+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images 
Remover"*** ***"t":***"$date":"2025-05-27T12:07:40.978+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:07:40.978+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** 
***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:07:40.979+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:07:40.981+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T12:07:40.983+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347660,"ts_usec":983669,"thread":"14:0x7f60f0c5d640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T12:07:40.986+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347660,"ts_usec":986163,"thread":"14:0x7f60f0c5d640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 53, snapshot max: 53 snapshot count: 0, oldest timestamp: (1748347357, 1) , meta checkpoint timestamp: (1748347657, 1) base write gen: 517"*** ***"t":***"$date":"2025-05-27T12:07:41.000+00:00"***,"s":"W", "c":"REPL", "id":6100702, "ctx":"ftdc","msg":"Failed to get last stable recovery timestamp due to lock acquire timeout. 
Note this is expected if shutdown is in progress."*** ***"t":***"$date":"2025-05-27T12:07:41.001+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347661,"ts_usec":1285,"thread":"14:0x7f60f0c5d640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 17 milliseconds"*** ***"t":***"$date":"2025-05-27T12:07:41.001+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347661,"ts_usec":1488,"thread":"14:0x7f60f0c5d640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 20ms, including 2ms for the rollback to stable, and 17ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T12:07:41.080+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":99*** ***"t":***"$date":"2025-05-27T12:07:41.080+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T12:07:41.080+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T12:07:41.080+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T12:07:41.082+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T12:07:41.082+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal 
shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15016 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"36 ms","Shut down replication executor":"0 ms","Join replication executor":"1 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"101 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15163 ms"*** ***"t":***"$date":"2025-05-27T12:07:41.082+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-10-38 <== ***"t":***"$date":"2025-05-27T12:10:27.546+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T12:10:27.546+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the 
collections"*** ***"t":***"$date":"2025-05-27T12:10:27.547+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."*** ***"t":***"$date":"2025-05-27T12:10:27.548+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T12:10:27.548+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:10:27.548+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:10:27.549+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T12:10:27.552+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347827,"ts_usec":552699,"thread":"14:0x7f5e8f761640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T12:10:27.554+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347827,"ts_usec":554030,"thread":"14:0x7f5e8f761640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 64, snapshot max: 64 snapshot count: 0, oldest timestamp: (1748347522, 1) , meta checkpoint timestamp: (1748347822, 1) base write gen: 537"*** ***"t":***"$date":"2025-05-27T12:10:27.569+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748347827,"ts_usec":569888,"thread":"14:0x7f5e8f761640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 17 milliseconds"*** ***"t":***"$date":"2025-05-27T12:10:27.572+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748347827,"ts_usec":572547,"thread":"14:0x7f5e8f761640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 22ms, including 2ms for the rollback to stable, and 17ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T12:10:27.581+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":32*** ***"t":***"$date":"2025-05-27T12:10:27.581+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T12:10:27.581+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T12:10:27.581+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T12:10:27.587+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T12:10:27.588+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15015 ms","Shut down FLE Crud subsystem":"1 ms","Shut down 
MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"1 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"1 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"291 ms","Shut down replication executor":"0 ms","Join replication executor":"2 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"1 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"34 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15356 ms"*** ***"t":***"$date":"2025-05-27T12:10:27.588+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-13-24 <== ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T12:13:23.145+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T12:13:23.146+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting 
down."*** ***"t":***"$date":"2025-05-27T12:13:23.146+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T12:13:23.146+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:13:23.146+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:13:23.147+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T12:13:23.149+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348003,"ts_usec":149937,"thread":"15:0x7f5929d8f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T12:13:23.152+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348003,"ts_usec":152085,"thread":"15:0x7f5929d8f640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 69, snapshot max: 69 snapshot count: 0, oldest timestamp: (1748347698, 2) , meta checkpoint timestamp: (1748347998, 2) base write gen: 557"*** ***"t":***"$date":"2025-05-27T12:13:23.171+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748348003,"ts_usec":171467,"thread":"15:0x7f5929d8f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 21 milliseconds"*** ***"t":***"$date":"2025-05-27T12:13:23.171+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348003,"ts_usec":171734,"thread":"15:0x7f5929d8f640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 24ms, including 2ms for the rollback to stable, and 21ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T12:13:23.178+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":31*** ***"t":***"$date":"2025-05-27T12:13:23.178+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T12:13:23.178+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T12:13:23.178+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T12:13:23.183+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T12:13:23.183+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15015 ms","Shut down FLE Crud subsystem":"0 ms","Shut down 
MirrorMaestro":"0 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"1 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"0 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"159 ms","Shut down replication executor":"0 ms","Join replication executor":"2 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"0 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"1 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"1 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"33 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"0 ms","shutdownTask total elapsed time":"15219 ms"*** ***"t":***"$date":"2025-05-27T12:13:23.183+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-16-06 <== ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"*** ***"t":***"$date":"2025-05-27T12:16:04.443+00:00"***,"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"*** ***"t":***"$date":"2025-05-27T12:16:04.444+00:00"***,"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"*** ***"t":***"$date":"2025-05-27T12:16:04.444+00:00"***,"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting 
down."*** ***"t":***"$date":"2025-05-27T12:16:04.444+00:00"***,"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"*** ***"t":***"$date":"2025-05-27T12:16:04.444+00:00"***,"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:16:04.444+00:00"***,"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"*** ***"t":***"$date":"2025-05-27T12:16:04.445+00:00"***,"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":***"closeConfig":"leak_memory=true,"*** ***"t":***"$date":"2025-05-27T12:16:04.447+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348164,"ts_usec":447700,"thread":"15:0x7f6e80898640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"*** ***"t":***"$date":"2025-05-27T12:16:04.448+00:00"***,"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348164,"ts_usec":448928,"thread":"15:0x7f6e80898640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 58, snapshot max: 58 snapshot count: 0, oldest timestamp: (1748347854, 1) , meta checkpoint timestamp: (1748348154, 1) base write gen: 578"*** ***"t":***"$date":"2025-05-27T12:16:04.463+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger 
message","attr":***"message":***"ts_sec":1748348164,"ts_usec":463409,"thread":"15:0x7f6e80898640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 15 milliseconds"*** ***"t":***"$date":"2025-05-27T12:16:04.463+00:00"***,"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":***"message":***"ts_sec":1748348164,"ts_usec":463643,"thread":"15:0x7f6e80898640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 18ms, including 2ms for the rollback to stable, and 15ms for the checkpoint."*** ***"t":***"$date":"2025-05-27T12:16:04.485+00:00"***,"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":***"durationMillis":40*** ***"t":***"$date":"2025-05-27T12:16:04.485+00:00"***,"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."*** ***"t":***"$date":"2025-05-27T12:16:04.485+00:00"***,"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"*** ***"t":***"$date":"2025-05-27T12:16:04.485+00:00"***,"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"*** ***"t":***"$date":"2025-05-27T12:16:04.573+00:00"***,"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"*** ***"t":***"$date":"2025-05-27T12:16:04.573+00:00"***,"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":***"Summary of time elapsed":***"Statistics":***"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"0 ms","Time spent in quiesce mode":"15015 ms","Shut down FLE Crud subsystem":"0 ms","Shut down 
MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"1 ms","Shut down the logical session cache":"0 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"1 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"0 ms","Shut down replication":"0 ms","Shut down external state":"998 ms","Shut down replication executor":"0 ms","Join replication executor":"1 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"1 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"1 ms","Shut down the migration util executor":"0 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"42 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"88 ms","shutdownTask total elapsed time":"16150 ms"*** ***"t":***"$date":"2025-05-27T12:16:04.574+00:00"***,"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":***"exitCode":0*** ==> /data/mongodb/logs/mongodb.log.2025-05-27T12-18-44 <== ***"t":***"$date":"2025-05-27T12:18:42.577+00:00"***,"s":"I", "c":"CONTROL", "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"*** ***"t":***"$date":"2025-05-27T12:18:42.577+00:00"***,"s":"I", "c":"INDEX", "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"*** ***"t":***"$date":"2025-05-27T12:18:42.577+00:00"***,"s":"I", "c":"INDEX", "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"*** 
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"CONTROL", "id":6278511, "ctx":"SignalHandler","msg":"Shutting down the Change Stream Expired Pre-images Remover"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"QUERY", "id":6278515, "ctx":"SignalHandler","msg":"Shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"QUERY", "id":6278516, "ctx":"SignalHandler","msg":"Finished shutting down Change Stream Expired Pre-images Remover thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"CONTROL", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2025-05-27T12:18:42.577+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2025-05-27T12:18:42.578+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2025-05-27T12:18:42.578+00:00"},"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."}
{"t":{"$date":"2025-05-27T12:18:42.578+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2025-05-27T12:18:42.578+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2025-05-27T12:18:42.578+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2025-05-27T12:18:42.579+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2025-05-27T12:18:42.582+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748348322,"ts_usec":582253,"thread":"14:0x7f4a11c63640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown rollback to stable has successfully finished and ran for 2 milliseconds"}}}
{"t":{"$date":"2025-05-27T12:18:42.584+00:00"},"s":"I", "c":"WTCHKPT", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748348322,"ts_usec":584215,"thread":"14:0x7f4a11c63640","session_name":"close_ckpt","category":"WT_VERB_CHECKPOINT_PROGRESS","category_id":6,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"saving checkpoint snapshot min: 64, snapshot max: 64 snapshot count: 0, oldest timestamp: (1748348016, 1) , meta checkpoint timestamp: (1748348316, 1) base write gen: 597"}}}
{"t":{"$date":"2025-05-27T12:18:42.598+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748348322,"ts_usec":598937,"thread":"14:0x7f4a11c63640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown checkpoint has successfully finished and ran for 16 milliseconds"}}}
{"t":{"$date":"2025-05-27T12:18:42.599+00:00"},"s":"I", "c":"WTRECOV", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":{"ts_sec":1748348322,"ts_usec":599259,"thread":"14:0x7f4a11c63640","session_name":"WT_CONNECTION.close","category":"WT_VERB_RECOVERY_PROGRESS","category_id":30,"verbose_level":"DEBUG_1","verbose_level_id":1,"msg":"shutdown was completed successfully and took 19ms, including 2ms for the rollback to stable, and 16ms for the checkpoint."}}}
{"t":{"$date":"2025-05-27T12:18:42.606+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":27}}
{"t":{"$date":"2025-05-27T12:18:42.606+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2025-05-27T12:18:42.606+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2025-05-27T12:18:42.606+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2025-05-27T12:18:42.676+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2025-05-27T12:18:42.676+00:00"},"s":"I", "c":"CONTROL", "id":8423404, "ctx":"SignalHandler","msg":"mongod shutdown complete","attr":{"Summary of time elapsed":{"Statistics":{"Enter terminal shutdown":"0 ms","Step down the replication coordinator for shutdown":"1 ms","Time spent in quiesce mode":"15012 ms","Shut down FLE Crud subsystem":"0 ms","Shut down MirrorMaestro":"1 ms","Shut down WaitForMajorityService":"0 ms","Shut down the logical session cache":"1 ms","Shut down the Query Analysis Sampler":"0 ms","Shut down the transport layer":"0 ms","Shut down the global connection pool":"1 ms","Shut down the flow control ticket holder":"0 ms","Shut down the replica set node executor":"0 ms","Shut down the replica set aware services":"1 ms","Shut down replication":"0 ms","Shut down external state":"525 ms","Shut down replication executor":"1 ms","Join replication executor":"1 ms","Kill all operations for shutdown":"0 ms","Shut down all tenant migration access blockers on global shutdown":"1 ms","Shut down all open transactions":"0 ms","Acquire the RSTL for shutdown":"0 ms","Shut down the IndexBuildsCoordinator and wait for index builds to finish":"0 ms","Shut down the replica set monitor":"0 ms","Shut down the logical time validator":"0 ms","Shut down the migration util executor":"1 ms","Shut down the health log":"0 ms","Shut down the TTL monitor":"0 ms","Shut down expired pre-images and documents removers":"0 ms","Shut down the storage engine":"29 ms","Wait for the oplog cap maintainer thread to stop":"0 ms","Shut down full-time data capture":"70 ms","shutdownTask total elapsed time":"15645 ms"}}}}
{"t":{"$date":"2025-05-27T12:18:42.676+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
delete cluster mongodb-mvdokf
`kbcli cluster delete mongodb-mvdokf --auto-approve --namespace ns-jcrws `
Cluster mongodb-mvdokf deleted
pod_info:
mongodb-mvdokf-mongodb-0   2/2   Terminating   6 (6m49s ago)    30m
mongodb-mvdokf-mongodb-1   2/2   Terminating   6 (7m6s ago)     33m
mongodb-mvdokf-mongodb-2   2/2   Terminating   18 (6m32s ago)   80m
No resources found in ns-jcrws namespace.
delete cluster pod done
No resources found in ns-jcrws namespace.
check cluster resource non-exist OK: pvc
No resources found in ns-jcrws namespace.
delete cluster done
No resources found in ns-jcrws namespace.
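The mongod log entries above are structured JSON documents, so the shutdown timing they report can be inspected programmatically rather than read by eye. A minimal sketch in Python, using an abridged copy of the `mongod shutdown complete` entry above (the `Statistics` map is trimmed to three of its keys for brevity):

```python
import json

# Abridged "mongod shutdown complete" entry from the log above.
entry = json.loads(
    '{"t":{"$date":"2025-05-27T12:18:42.676+00:00"},"s":"I","c":"CONTROL",'
    '"id":8423404,"ctx":"SignalHandler","msg":"mongod shutdown complete",'
    '"attr":{"Summary of time elapsed":{"Statistics":{'
    '"Time spent in quiesce mode":"15012 ms",'
    '"Shut down the storage engine":"29 ms",'
    '"shutdownTask total elapsed time":"15645 ms"}}}}'
)

stats = entry["attr"]["Summary of time elapsed"]["Statistics"]
# Durations are strings like "15012 ms"; convert them to integers.
millis = {step: int(value.split()[0]) for step, value in stats.items()}
# Find the dominant shutdown phase, excluding the overall total.
slowest = max((s for s in millis if s != "shutdownTask total elapsed time"),
              key=millis.get)
print(slowest, millis[slowest])  # prints: Time spent in quiesce mode 15012
```

Run against the full entries, this makes the pattern in both shutdowns obvious: nearly all of the ~16 s shutdown is the 15 s quiesce window, while the storage engine itself closes in tens of milliseconds.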
No resources found in ns-jcrws namespace.
No resources found in ns-jcrws namespace.
Mongodb Test Suite All Done!
--------------------------------------Mongodb (Topology = replicaset Replicas 3) Test Result--------------------------------------
[PASSED]|[Create]|[ComponentDefinition=mongodb-1.0.0-alpha.0;ComponentVersion=mongodb;ServiceVersion=7.0.12;]|[Description=Create a cluster with the specified component definition mongodb-1.0.0-alpha.0, component version mongodb, and service version 7.0.12]
[PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[AddData]|[Values=sxxpu]|[Description=Add data to the cluster]
[PASSED]|[CheckAddDataReadonly]|[Values=sxxpu;Role=Readonly]|[Description=Check the added data from a readonly replica]
[PASSED]|[Failover]|[HA=Evicting Pod;ComponentName=mongodb]|[Description=Simulates conditions where pods are evicted (e.g. due to a node drain), thereby testing the application's resilience to the unavailability of some replicas during eviction]
[PASSED]|[VerticalScaling]|[ComponentName=mongodb]|[Description=Vertically scale the cluster's mongodb component]
[PASSED]|[Connect]|[Endpoints=true]|[Description=Connect to the cluster]
[WARNING]|[Operation]|[Succeed Or Failed Soon]|[Description=-]
[PASSED]|[HscaleOfflineInstances]|[ComponentName=mongodb]|[Description=Horizontally scale the mongodb component by taking instances offline]
[PASSED]|[HscaleOnlineInstances]|[ComponentName=mongodb]|[Description=Horizontally scale the mongodb component by bringing instances online]
[PASSED]|[Stop]|[-]|[Description=Stop the cluster]
[PASSED]|[Start]|[-]|[Description=Start the cluster]
[PASSED]|[HorizontalScaling Out]|[ComponentName=mongodb]|[Description=Horizontally scale out the cluster's mongodb component]
[PASSED]|[HorizontalScaling In]|[ComponentName=mongodb]|[Description=Horizontally scale in the cluster's mongodb component]
[PASSED]|[Restart]|[-]|[Description=Restart the cluster]
[PASSED]|[Failover]|[HA=DNS Random;Durations=2m;ComponentName=mongodb]|[Description=Simulates conditions where the DNS service returns random IP addresses for pods over a period of time, thereby testing the application's resilience to potential slowness or unavailability of some replicas caused by random DNS answers]
[PASSED]|[Failover]|[HA=Network Bandwidth Failover;Durations=2m;ComponentName=mongodb]|[Description=]
[PASSED]|[VolumeExpansion]|[ComponentName=mongodb]|[Description=Expand the volumes of the cluster's mongodb component]
[PASSED]|[Failover]|[HA=Full CPU;Durations=2m;ComponentName=mongodb]|[Description=Simulates conditions where pods experience full CPU saturation, thereby testing the application's resilience to potential slowness or unavailability of some replicas under high CPU load]
[PASSED]|[Failover]|[HA=DNS Error;Durations=2m;ComponentName=mongodb]|[Description=Simulates conditions where pods experience DNS service errors for a period of time, thereby testing the application's resilience to potential slowness or unavailability of some replicas caused by DNS errors]
[PASSED]|[Failover]|[HA=Time Offset;Durations=2m;ComponentName=mongodb]|[Description=Simulates a time-offset scenario, thereby testing the application's resilience to potential slowness or unavailability of some replicas caused by clock skew]
[PASSED]|[Failover]|[HA=Network Corrupt Failover;Durations=2m;ComponentName=mongodb]|[Description=]
[PASSED]|[Update]|[Monitor=true]|[Description=Update the cluster to enable monitoring]
[PASSED]|[Failover]|[HA=Network Loss Failover;Durations=2m;ComponentName=mongodb]|[Description=]
[PASSED]|[Failover]|[HA=Kill 1;ComponentName=mongodb]|[Description=Simulates conditions where process 1 is killed, thereby testing the application's resilience to the unavailability of some replicas after abnormal termination signals]
[PASSED]|[Failover]|[HA=Network Partition;Durations=2m;ComponentName=mongodb]|[Description=Simulates a network partition fault, thereby testing the application's resilience to potential slowness or unavailability of some replicas in a partitioned network]
[PASSED]|[Failover]|[HA=Network Delay;Durations=2m;ComponentName=mongodb]|[Description=Simulates a network delay fault, thereby testing the application's resilience to potential slowness or unavailability of some replicas on a delayed network]
[PASSED]|[SwitchOver]|[ComponentName=mongodb]|[Description=Switch over the cluster's mongodb component]
[PASSED]|[Failover]|[HA=OOM;Durations=2m;ComponentName=mongodb]|[Description=Simulates conditions where pods experience OOM, thereby testing the application's resilience to potential slowness or unavailability of some replicas under high memory load]
[PASSED]|[RebuildInstance]|[ComponentName=mongodb]|[Description=Rebuild an instance of the cluster's mongodb component]
[PASSED]|[Failover]|[HA=Delete Pod;ComponentName=mongodb]|[Description=Simulates conditions where pods are terminated, forcefully or gracefully, thereby testing deployment sanity (replica availability and uninterrupted service) and the application's recovery workflow]
[PASSED]|[Failover]|[HA=Pod Failure;Durations=2m;ComponentName=mongodb]|[Description=Simulates conditions where pods fail for a period of time, thereby testing the application's resilience to potential slowness or unavailability of some replicas during the failure]
[PASSED]|[Failover]|[HA=Network Duplicate;Durations=2m;ComponentName=mongodb]|[Description=Simulates a network packet-duplication fault, thereby testing the application's resilience to potential slowness or unavailability of some replicas on a network with duplicated packets]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.12;ComponentVersionTo=7.0.19]|[Description=Upgrade the mongodb component's service version from 7.0.12 to 7.0.19]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.19;ComponentVersionTo=7.0.12]|[Description=Upgrade the mongodb component's service version from 7.0.19 to 7.0.12]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.12;ComponentVersionTo=7.0.16]|[Description=Upgrade the mongodb component's service version from 7.0.12 to 7.0.16]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.16;ComponentVersionTo=7.0.19]|[Description=Upgrade the mongodb component's service version from 7.0.16 to 7.0.19]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.19;ComponentVersionTo=7.0.16]|[Description=Upgrade the mongodb component's service version from 7.0.19 to 7.0.16]
[PASSED]|[Upgrade]|[ComponentName=mongodb;ComponentVersionFrom=7.0.16;ComponentVersionTo=7.0.12]|[Description=Upgrade the mongodb component's service version from 7.0.16 to 7.0.12]
[PASSED]|[Update]|[TerminationPolicy=WipeOut]|[Description=Update the cluster's TerminationPolicy to WipeOut]
[PASSED]|[Backup]|[BackupMethod=dump]|[Description=Back up the cluster using the dump method]
[PASSED]|[Restore]|[BackupMethod=dump]|[Description=Restore the cluster from the dump backup]
[PASSED]|[Check Data]|[BackupMethod=dump]|[Description=Check the cluster data restored via dump]
[PASSED]|[Connect]|[ComponentName=mongodb]|[Description=Connect to the cluster]
[PASSED]|[Delete Restore Cluster]|[BackupMethod=dump]|[Description=Delete the cluster restored from the dump backup]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
[END]
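Each line of the report above is pipe-delimited, `[STATUS]|[Case]|[Params]|[Description=...]`, so a pass/fail summary can be derived mechanically. A small sketch in Python, using a few abridged sample lines in the same format (the descriptions here are shortened placeholders, not the full report):

```python
from collections import Counter

# Abridged sample of the report format shown above.
report = """\
[PASSED]|[Create]|[ComponentDefinition=mongodb-1.0.0-alpha.0;ServiceVersion=7.0.12]|[Description=Create a cluster]
[WARNING]|[Operation]|[Succeed Or Failed Soon]|[Description=-]
[PASSED]|[Delete]|[-]|[Description=Delete the cluster]
"""

def parse(line):
    # Split on pipes, then strip the surrounding brackets from each field.
    return [field.strip("[]") for field in line.split("|")]

rows = [parse(line) for line in report.strip().splitlines()]
summary = Counter(status for status, *_ in rows)
print(dict(summary))  # prints: {'PASSED': 2, 'WARNING': 1}
```

Applied to the full report, the same loop would tally every `[PASSED]` and surface the single `[WARNING]` at a glance; the semicolon-separated `Params` field can be split further into key=value pairs the same way.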