Name: smithi152.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2024-05-12 08:56:31.791845
Locked By: scheduled_teuthology@teuthology
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2024-05-02_22:32:02-powercycle-reef-distro-default-smithi/7686762

Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7703220 2024-05-12 05:24:31 2024-05-12 07:32:55 2024-05-12 08:14:55 0:42:00 0:32:12 0:09:48 smithi main rhel 8.6 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/{0-install test/cfuse_workunit_suites_blogbench}} 3
pass 7703194 2024-05-12 05:18:19 2024-05-12 07:08:14 2024-05-12 07:35:33 0:27:19 0:17:28 0:09:51 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/centos_latest tasks/{0-install test/cfuse_workunit_suites_blogbench}} 3
pass 7703138 2024-05-12 05:17:21 2024-05-12 06:18:43 2024-05-12 07:08:17 0:49:34 0:29:23 0:20:11 smithi main centos 8.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/centos_8 tasks/{0-install test/rados_workunit_loadgen_mix}} 3
pass 7703095 2024-05-12 05:09:07 2024-05-12 05:47:00 2024-05-12 06:18:49 0:31:49 0:18:36 0:13:13 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rgw_s3tests}} 3
pass 7703067 2024-05-12 05:02:50 2024-05-12 05:26:39 2024-05-12 05:49:28 0:22:49 0:12:02 0:10:47 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/rbd_workunit_suites_iozone}} 3
pass 7703049 2024-05-12 05:02:32 2024-05-12 05:06:29 2024-05-12 05:28:07 0:21:38 0:11:40 0:09:58 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/cfuse_workunit_suites_pjd}} 3
pass 7702951 2024-05-11 21:49:25 2024-05-12 02:04:35 2024-05-12 02:34:26 0:29:51 0:14:50 0:15:01 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/legacy$/{legacy-rxbounce} msgr-failures/many tasks/rbd_image_read} 3
dead 7702479 2024-05-11 09:20:36 2024-05-11 13:57:35 2024-05-12 02:06:30 12:08:55 smithi main ubuntu 22.04 rgw/notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/others/{0-install supported-distros/{ubuntu_latest} test_others}} 2
Failure Reason: hit max job timeout

fail 7702404 2024-05-11 06:03:07 2024-05-11 12:57:33 2024-05-11 13:51:33 0:54:00 0:42:04 0:11:56 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: "2024-05-11T13:40:00.000368+0000 mon.smithi138 (mon.0) 524 : cluster [WRN] osd.5 (root=default,host=smithi152) is down" in cluster log

pass 7702354 2024-05-11 06:02:08 2024-05-11 11:49:32 2024-05-11 12:58:11 1:08:39 0:57:48 0:10:51 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 7702340 2024-05-11 06:01:51 2024-05-11 11:28:53 2024-05-11 11:50:04 0:21:11 0:10:05 0:11:06 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
pass 7702311 2024-05-11 06:01:17 2024-05-11 10:55:36 2024-05-11 11:28:45 0:33:09 0:20:56 0:12:13 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
pass 7702279 2024-05-11 06:00:39 2024-05-11 10:11:06 2024-05-11 10:55:56 0:44:50 0:33:38 0:11:12 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/fsstress}} 3
pass 7702233 2024-05-11 05:59:44 2024-05-11 09:03:56 2024-05-11 10:11:17 1:07:21 0:56:32 0:10:49 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
pass 7702184 2024-05-11 05:58:44 2024-05-11 08:02:17 2024-05-11 09:05:28 1:03:11 0:52:07 0:11:04 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host overrides/{ignorelist_health pg_health}} 2
fail 7702099 2024-05-11 03:09:52 2024-05-11 04:41:19 2024-05-11 07:48:43 3:07:24 2:56:53 0:10:31 smithi main centos 9.stream upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/snaps-few-objects 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason: "2024-05-11T05:12:13.590663+0000 mon.a (mon.0) 581 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7701924 2024-05-10 21:40:59 2024-05-10 23:00:51 2024-05-10 23:36:58 0:36:07 0:23:01 0:13:06 smithi main ubuntu 22.04 rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_kv 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch ubuntu_latest} 1
fail 7701850 2024-05-10 21:11:17 2024-05-10 22:29:42 2024-05-10 22:49:05 0:19:23 0:07:03 0:12:20 smithi main centos 9.stream orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason: Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f72fecff68e1d400c4568684327c900485c20d6a pull'

fail 7701782 2024-05-10 21:10:08 2024-05-10 21:58:42 2024-05-10 22:19:41 0:20:59 0:09:25 0:11:34 smithi main ubuntu 22.04 orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason: Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f72fecff68e1d400c4568684327c900485c20d6a pull'

pass 7701612 2024-05-10 18:38:06 2024-05-10 18:51:48 2024-05-10 21:52:34 3:00:46 2:39:33 0:21:13 smithi main rhel 8.6 fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/admin} 2