Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6632572 2022-01-21 17:36:31 2022-01-21 17:37:36 2022-01-21 18:04:45 0:27:09 0:17:21 0:09:48 smithi master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-01-21T17:56:35.327791+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 6632573 2022-01-21 17:36:32 2022-01-21 17:37:36 2022-01-21 18:03:01 0:25:25 0:14:16 0:11:09 smithi master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
Failure Reason:

Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)

fail 6632574 2022-01-21 17:36:33 2022-01-21 17:38:07 2022-01-21 17:55:22 0:17:15 0:08:58 0:08:17 smithi master centos 8.2 fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/centos_latest} 2
Failure Reason:

Command failed on smithi086 with status 5: 'sudo systemctl stop ceph-2e754fd4-7ae3-11ec-8c35-001a4aab830c@mon.smithi086'

fail 6632575 2022-01-21 17:36:34 2022-01-21 17:38:17 2022-01-21 17:59:50 0:21:33 0:10:22 0:11:11 smithi master ubuntu 18.04 fs/upgrade/volumes/import-legacy/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} 2
Failure Reason:

Command failed on smithi114 with status 1: "sudo nsenter --net=/var/run/netns/ceph-ns-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' --id vol_data_isolated --client_mountpoint=/volumes/_nogroup/vol_isolated mnt.0"

fail 6632576 2022-01-21 17:36:36 2022-01-21 17:39:58 2022-01-21 18:08:15 0:28:17 0:11:55 0:16:22 smithi master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} 2
Failure Reason:

Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

pass 6632577 2022-01-21 17:36:37 2022-01-21 17:40:38 2022-01-21 18:16:51 0:36:13 0:19:07 0:17:06 smithi master ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2