Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6252345 2021-07-04 14:38:35 2021-07-04 14:38:36 2021-07-04 15:13:18 0:34:42 0:18:40 0:16:02 smithi master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
Failure Reason: Test failure: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)

pass 6252346 2021-07-04 14:38:36 2021-07-04 14:38:36 2021-07-04 15:05:42 0:27:06 0:16:22 0:10:44 smithi master rhel 8.4 fs/verify/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/fsstress validater/lockdep} 2
pass 6252347 2021-07-04 14:38:37 2021-07-04 14:38:37 2021-07-04 15:30:30 0:51:53 0:41:18 0:10:35 smithi master centos 8.2 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
pass 6252348 2021-07-04 14:38:38 2021-07-04 14:38:38 2021-07-04 15:02:06 0:23:28 0:11:50 0:11:38 smithi master ubuntu 20.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
fail 6252350 2021-07-04 14:38:39 2021-07-04 14:38:39 2021-07-04 15:12:20 0:33:41 0:21:05 0:12:36 smithi master ubuntu 20.04 fs/shell/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cephfs-shell} 2
Failure Reason: Test failure: test_ls_H_prints_human_readable_file_size (tasks.cephfs.test_cephfs_shell.TestLS)

fail 6252352 2021-07-04 14:38:40 2021-07-04 14:38:41 2021-07-04 15:00:04 0:21:23 0:09:47 0:11:36 smithi master ubuntu 18.04 fs/upgrade/volumes/import-legacy/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} 2
Failure Reason: Command failed on smithi153 with status 1: "sudo nsenter --net=/var/run/netns/ceph-ns-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' --id vol_data_isolated --client_mountpoint=/volumes/_nogroup/vol_isolated mnt.0"

pass 6252354 2021-07-04 14:38:41 2021-07-04 14:38:42 2021-07-04 15:05:57 0:27:15 0:15:41 0:11:34 smithi master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/alternate-pool} 2
fail 6252355 2021-07-04 14:38:42 2021-07-04 14:38:42 2021-07-04 15:00:06 0:21:24 0:13:16 0:08:08 smithi master rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} 2
Failure Reason: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

pass 6252356 2021-07-04 14:38:43 2021-07-04 14:38:43 2021-07-04 15:32:01 0:53:18 0:38:13 0:15:05 smithi master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2
pass 6252357 2021-07-04 14:38:45 2021-07-04 14:38:45 2021-07-04 15:00:16 0:21:31 0:13:12 0:08:19 smithi master rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-flush} 2
fail 6252358 2021-07-04 14:38:46 2021-07-04 14:38:47 2021-07-04 15:16:27 0:37:40 0:22:28 0:15:12 smithi master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
Failure Reason: Test failure: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)

pass 6252359 2021-07-04 14:38:47 2021-07-04 14:38:47 2021-07-04 15:08:41 0:29:54 0:17:17 0:12:37 smithi master ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/iozone}} 2