Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6765497 2022-03-28 15:35:23 2022-03-28 16:24:37 2022-03-28 17:21:44 0:57:07 0:41:40 0:15:27 smithi master ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
dead 6765498 2022-03-28 15:35:24 2022-03-28 16:24:37 2022-03-28 23:13:56 6:49:19 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

hit max job timeout

fail 6765499 2022-03-28 15:35:25 2022-03-28 16:24:37 2022-03-28 16:47:53 0:23:16 0:10:55 0:12:21 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3
Failure Reason:

Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:14f44febaa74e8f8931e156f1a921292708ad47a -v bootstrap --fsid 75d4a4cc-aeb6-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6765500 2022-03-28 15:35:26 2022-03-28 16:24:38 2022-03-28 17:59:11 1:34:33 1:25:32 0:09:01 smithi master centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 6765501 2022-03-28 15:35:27 2022-03-28 16:24:38 2022-03-28 16:50:08 0:25:30 0:11:05 0:14:25 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
Failure Reason:

Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:14f44febaa74e8f8931e156f1a921292708ad47a -v bootstrap --fsid b25f0cb6-aeb6-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.18 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6765502 2022-03-28 15:35:28 2022-03-28 16:24:38 2022-03-28 16:51:24 0:26:46 0:10:50 0:15:56 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:14f44febaa74e8f8931e156f1a921292708ad47a -v bootstrap --fsid db343544-aeb6-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.27 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

fail 6765503 2022-03-28 15:35:29 2022-03-28 16:24:39 2022-03-28 16:47:14 0:22:35 0:11:02 0:11:33 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
Failure Reason:

Command failed on smithi007 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:14f44febaa74e8f8931e156f1a921292708ad47a -v bootstrap --fsid 5586d384-aeb6-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.7 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

pass 6765505 2022-03-28 15:35:30 2022-03-28 16:24:39 2022-03-28 16:59:40 0:35:01 0:26:15 0:08:46 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
pass 6765507 2022-03-28 15:35:31 2022-03-28 16:24:40 2022-03-28 18:08:27 1:43:47 1:34:42 0:09:05 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6765510 2022-03-28 15:35:32 2022-03-28 16:24:41 2022-03-28 16:46:18 0:21:37 0:11:27 0:10:10 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
Failure Reason:

Command failed on smithi037 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:14f44febaa74e8f8931e156f1a921292708ad47a -v bootstrap --fsid 4c4c1eb4-aeb6-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.37 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

dead 6765512 2022-03-28 15:35:33 2022-03-28 16:24:42 2022-03-28 17:46:56 1:22:14 1:11:18 0:10:56 smithi master rhel 8.4 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/fsstress validater/valgrind} 2
Failure Reason:

SSH connection to smithi026 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

pass 6765514 2022-03-28 15:35:34 2022-03-28 16:24:43 2022-03-28 16:59:24 0:34:41 0:21:59 0:12:42 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
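Note that five of the failed workload jobs above (6765499, 6765501, 6765502, 6765503, 6765510) share the same failure mode: the cephadm bootstrap command exits with status 1 on the target smithi host. A minimal Python sketch such as the one below can tally job statuses and group recurring failure reasons from a plain-text listing like this one; the file name run.txt and the summarize helper are illustrative assumptions, not part of the run output.

```python
"""Minimal sketch: summarize a plain-text teuthology run listing.

Assumptions (not part of the original report): the listing has been saved
verbatim to 'run.txt'; each job row starts with a status word
(pass/fail/dead) followed by the numeric job ID, and failure details
follow a 'Failure Reason:' label line.
"""
from collections import Counter

STATUSES = {"pass", "fail", "dead"}

def summarize(path="run.txt"):
    status_counts = Counter()
    jobs_by_status = {}
    failure_reasons = []

    with open(path) as fh:
        lines = [line.rstrip("\n") for line in fh]

    for i, line in enumerate(lines):
        parts = line.split()
        # A job row begins with a status word followed by a numeric job ID.
        if len(parts) >= 2 and parts[0] in STATUSES and parts[1].isdigit():
            status, job_id = parts[0], parts[1]
            status_counts[status] += 1
            jobs_by_status.setdefault(status, []).append(job_id)
        elif line.startswith("Failure Reason:"):
            # The reason text is the next non-empty line after the label.
            for nxt in lines[i + 1:]:
                if nxt.strip():
                    failure_reasons.append(nxt.strip())
                    break

    print("status counts:", dict(status_counts))
    for status, ids in jobs_by_status.items():
        print(f"{status}: {', '.join(ids)}")

    # Group reasons by their first few words to surface recurring failures,
    # e.g. the repeated "Command failed ... cephadm ... bootstrap" errors.
    grouped = Counter(" ".join(r.split()[:4]) for r in failure_reasons)
    for prefix, n in grouped.most_common():
        print(f"{n}x  {prefix} ...")

if __name__ == "__main__":
    summarize()
```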