Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi114.front.sepia.ceph.com smithi True True 2024-04-25 12:01:38.945239 scheduled_leonidus@teuthology centos 9 x86_64 /home/teuthworker/archive/leonidus-2024-04-25_11:13:36-fs-wip-lusov-quiescer-distro-default-smithi/7673360

Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7673360 2024-04-25 11:14:49 2024-04-25 12:01:38 2024-04-25 12:36:40 0:36:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
fail 7673133 2024-04-25 10:02:02 2024-04-25 11:05:57 2024-04-25 11:49:41 0:43:44 0:36:22 0:07:22 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7673104 2024-04-25 10:01:32 2024-04-25 10:41:02 2024-04-25 11:05:57 0:24:55 0:13:27 0:11:28 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/backtrace} 2
pass 7673064 2024-04-25 10:00:50 2024-04-25 10:08:19 2024-04-25 10:41:46 0:33:27 0:26:32 0:06:55 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} 3
pass 7672901 2024-04-25 06:28:33 2024-04-25 06:35:57 2024-04-25 09:31:37 2:55:40 2:47:03 0:08:37 smithi main ubuntu 22.04 rados:standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} 1
pass 7672847 2024-04-25 05:02:10 2024-04-25 05:33:33 2024-04-25 06:36:07 1:02:34 0:51:43 0:10:51 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/rados_bench}} 3
fail 7672815 2024-04-25 03:55:22 2024-04-25 05:01:51 2024-04-25 05:22:20 0:20:29 0:13:11 0:07:18 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-04-25T05:19:18.473934+0000 mon.smithi111 (mon.0) 846 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) ["Failed while placing nfs.foo.0.0.smithi111.qvpyfv on smithi111: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\nterminate called after throwing an instance of 'std::bad_variant_access'\n what(): std::get: wrong index for variant\n"] (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

fail 7672755 2024-04-25 03:54:21 2024-04-25 04:16:06 2024-04-25 04:51:44 0:35:38 0:23:54 0:11:44 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-04-25T04:47:11.248588+0000 mon.a (mon.0) 1129 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

pass 7672451 2024-04-24 21:49:09 2024-04-25 03:07:29 2024-04-25 03:41:32 0:34:03 0:19:29 0:14:34 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/crc$/{crc-rxbounce} msgr-failures/many tasks/rbd_map_unmap} 3
pass 7672420 2024-04-24 21:48:38 2024-04-25 02:31:52 2024-04-25 03:07:45 0:35:53 0:19:21 0:16:32 smithi main centos 8.stream krbd/fsx/{ceph/ceph clusters/3-node conf features/object-map ms_mode$/{crc} objectstore/bluestore-bitmap striping/fancy/{msgr-failures/few randomized-striping-on} tasks/fsx-3-client} 3
pass 7672379 2024-04-24 21:28:43 2024-04-25 02:00:07 2024-04-25 02:32:01 0:31:54 0:19:47 0:12:07 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7672335 2024-04-24 21:27:58 2024-04-25 01:25:14 2024-04-25 02:02:21 0:37:07 0:22:35 0:14:32 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7672167 2024-04-24 21:25:16 2024-04-24 22:42:57 2024-04-25 01:19:00 2:36:03 2:28:20 0:07:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-04-24T23:50:00.000143+0000 mon.a (mon.0) 1508 : cluster 3 [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)" in cluster log

fail 7671826 2024-04-24 15:50:26 2024-04-24 16:52:33 2024-04-24 17:11:40 0:19:07 0:12:09 0:06:58 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"2024-04-24T17:07:45.782213+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi114 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7671806 2024-04-24 15:50:05 2024-04-24 16:36:44 2024-04-24 16:50:00 0:13:16 0:06:18 0:06:58 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi165 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a3c7838-025a-11ef-bc93-c7b262605968 -- ceph orch apply mon '3;smithi114:172.21.15.114=a;smithi114:[v2:172.21.15.114:3301,v1:172.21.15.114:6790]=c;smithi165:172.21.15.165=b'"

pass 7671736 2024-04-24 15:14:34 2024-04-24 19:11:50 2024-04-24 20:07:16 0:55:26 0:41:16 0:14:10 smithi main centos 8.stream krbd/thrash/{bluestore-bitmap ceph/ceph clusters/fixed-3 conf ms_mode$/{crc} thrashers/upmap thrashosds-health workloads/rbd_fio} 3
pass 7671681 2024-04-24 14:13:37 2024-04-24 18:19:33 2024-04-24 19:12:52 0:53:19 0:46:46 0:06:33 smithi main rhel 8.6 rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{rhel_8} workloads/rbd-mirror-snapshot-workunit-minimum} 2
pass 7671650 2024-04-24 14:13:04 2024-04-24 17:48:07 2024-04-24 18:19:37 0:31:30 0:15:21 0:16:09 smithi main centos 8.stream rbd/singleton-bluestore/{all/issue-20295 conf/{disable-pool-app} objectstore/bluestore-comp-snappy openstack supported-random-distro$/{centos_8}} 4
pass 7671607 2024-04-24 14:12:16 2024-04-24 17:23:27 2024-04-24 17:48:18 0:24:51 0:18:37 0:06:14 smithi main rhel 8.6 rbd/singleton/{all/mon-command-help conf/{disable-pool-app} objectstore/bluestore-comp-snappy openstack supported-random-distro$/{rhel_8}} 1
pass 7671437 2024-04-24 13:00:25 2024-04-24 13:13:32 2024-04-24 14:09:31 0:55:59 0:45:04 0:10:55 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2