Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 4782260 2020-02-19 16:45:24 2020-02-20 19:28:48 2020-02-20 20:44:49 1:16:01 0:28:37 0:47:24 smithi master centos 7.5 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{centos_latest.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/filestore-xfs.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml} 3
Failure Reason:

"2020-02-20 20:29:54.207586 mon.b (mon.0) 639 : cluster [WRN] Health check failed: Reduced data availability: 16 pgs inactive (PG_AVAILABILITY)" in cluster log

pass 4782261 2020-02-19 16:45:25 2020-02-20 19:30:49 2020-02-20 20:38:49 1:08:00 0:36:54 0:31:06 smithi master rhel 7.6 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
pass 4782262 2020-02-19 16:45:26 2020-02-20 19:31:06 2020-02-20 20:25:06 0:54:00 0:30:57 0:23:03 smithi master ubuntu 16.04 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{ubuntu_16.04.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-bitmap.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_ffsb.yaml} 3
fail 4782263 2020-02-19 16:45:27 2020-02-20 19:32:23 2020-02-20 20:16:22 0:43:59 0:05:16 0:38:43 smithi master centos multimds/verify/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/fuse.yaml objectstore-ec/bluestore-bitmap.yaml overrides/{fuse-default-perm-no.yaml verify/{frag_enable.yaml mon-debug.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml}} tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml} 3
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse ceph-debuginfo python3-cephfs bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel'

fail 4782264 2020-02-19 16:45:28 2020-02-20 19:32:23 2020-02-20 20:20:23 0:48:00 0:19:07 0:28:53 smithi master rhel 7.6 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} objectstore-ec/filestore-xfs.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_exports.yaml} 3
Failure Reason:

"2020-02-20 20:12:49.902866 mon.b (mon.0) 274 : cluster [WRN] Health check failed: Reduced data availability: 16 pgs inactive (PG_AVAILABILITY)" in cluster log

fail 4782265 2020-02-19 16:45:29 2020-02-20 19:32:23 2020-02-20 20:18:22 0:45:59 0:24:13 0:21:46 smithi master multimds/basic/{begin.yaml clusters/3-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/fuse.yaml objectstore-ec/bluestore-bitmap.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml} 3
Failure Reason:

"2020-02-20 20:10:11.028876 mon.b (mon.0) 1073 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log

pass 4782266 2020-02-19 16:45:30 2020-02-20 19:32:30 2020-02-20 20:14:30 0:42:00 0:23:08 0:18:52 smithi master rhel 7.5 multimds/thrash/{begin.yaml ceph-thrash/default.yaml clusters/3-mds-2-standby.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{rhel_latest.yaml}} ms-die-on-skipped.yaml}} msgr-failures/osd-mds-delay.yaml objectstore-ec/bluestore-bitmap.yaml overrides/{fuse-default-perm-no.yaml thrash/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} thrash_debug.yaml} tasks/cfuse_workunit_suites_fsstress.yaml} 3
dead 4782267 2020-02-19 16:45:31 2020-02-20 19:32:39 2020-02-20 20:14:39 0:42:00 smithi master rhel 7.6 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml}
Failure Reason:

reached maximum tries (60) after waiting for 900 seconds

fail 4782268 2020-02-19 16:45:32 2020-02-20 19:32:40 2020-02-20 19:52:40 0:20:00 0:05:11 0:14:49 smithi master centos multimds/verify/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/fuse.yaml objectstore-ec/bluestore-comp.yaml overrides/{fuse-default-perm-no.yaml verify/{frag_enable.yaml mon-debug.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml}} tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml} 3
Failure Reason:

Command failed on smithi045 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse ceph-debuginfo python3-cephfs bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel'

fail 4782269 2020-02-19 16:45:33 2020-02-20 19:32:48 2020-02-20 20:38:48 1:06:00 0:29:00 0:37:00 smithi master rhel 7.5 multimds/basic/{begin.yaml clusters/3-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{rhel_latest.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml} 3
Failure Reason:

Command failed on smithi074 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 4782270 2020-02-19 16:45:35 2020-02-20 19:36:06 2020-02-20 19:56:06 0:20:00 0:05:09 0:14:51 smithi master centos multimds/verify/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/fuse.yaml objectstore-ec/filestore-xfs.yaml overrides/{fuse-default-perm-no.yaml verify/{frag_enable.yaml mon-debug.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml}} tasks/cfuse_workunit_suites_fsstress.yaml validater/valgrind.yaml} 3
Failure Reason:

Command failed on smithi071 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse ceph-debuginfo python3-cephfs bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel bison flex elfutils-libelf-devel openssl-devel'

pass 4782271 2020-02-19 16:45:36 2020-02-20 19:36:13 2020-02-20 20:36:15 1:00:02 0:22:22 0:37:40 smithi master rhel 7.6 multimds/thrash/{begin.yaml ceph-thrash/default.yaml clusters/3-mds-2-standby.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} msgr-failures/osd-mds-delay.yaml objectstore-ec/filestore-xfs.yaml overrides/{fuse-default-perm-no.yaml thrash/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} thrash_debug.yaml} tasks/cfuse_workunit_suites_fsstress.yaml} 3
fail 4782272 2020-02-19 16:45:37 2020-02-20 19:36:13 2020-02-20 20:26:13 0:50:00 0:28:09 0:21:51 smithi master multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/fuse.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml} 3
Failure Reason:

"2020-02-20 20:17:23.639948 mon.a (mon.0) 1259 : cluster [WRN] Health check failed: Reduced data availability: 16 pgs inactive (PG_AVAILABILITY)" in cluster log

pass 4782273 2020-02-19 16:45:38 2020-02-20 19:38:38 2020-02-20 20:32:45 0:54:07 0:39:44 0:14:23 smithi master rhel 7.6 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-bitmap.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_misc.yaml} 3
fail 4782274 2020-02-19 16:45:39 2020-02-20 19:42:33 2020-02-20 20:08:33 0:26:00 0:15:30 0:10:30 smithi master ubuntu 16.04 multimds/basic/{begin.yaml clusters/9-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml mount/kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{ubuntu_16.04.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-ec-root.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_exports.yaml} 3
Failure Reason:

"2020-02-20 20:00:29.638330 mon.b (mon.0) 354 : cluster [WRN] Health check failed: Reduced data availability: 16 pgs inactive (PG_AVAILABILITY)" in cluster log

fail 4782275 2020-02-19 16:45:40 2020-02-20 19:42:34 2020-02-20 20:48:35 1:06:01 0:27:42 0:38:19 smithi master rhel 7.6 multimds/basic/{begin.yaml clusters/3-mds.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml mount/kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_latest.yaml} ms-die-on-skipped.yaml}} objectstore-ec/filestore-xfs.yaml overrides/{basic/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cephfs_test_snapshots.yaml} 3
Failure Reason:

Command failed on smithi112 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'