Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 4821861 2020-03-03 19:58:41 2020-03-03 19:59:44 2020-03-03 20:23:42 0:23:58 0:13:09 0:10:49 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
fail 4821862 2020-03-03 19:58:42 2020-03-03 19:59:54 2020-03-04 02:42:02 6:42:08 6:30:20 0:11:48 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi081 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=8dbc3e5858eb173f1cb3f90936be7b5a4e16226e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
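Exit status 124 here is the signature of a timeout rather than a test assertion failure: the workunit is wrapped in GNU coreutils `timeout 6h`, and `timeout` exits with 124 when the wrapped command exceeds its limit. A minimal reproduction of that exit code (the `sleep` command is just a stand-in for the long-running test):

```shell
# GNU coreutils `timeout` kills the child after the limit and
# exits with status 124, matching the failure above.
timeout 0.1 sleep 1
echo "exit status: $?"   # prints "exit status: 124"
```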

pass 4821863 2020-03-03 19:58:43 2020-03-03 20:01:14 2020-03-03 20:39:13 0:37:59 0:23:11 0:14:48 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-avl.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
fail 4821864 2020-03-03 19:58:44 2020-03-03 20:01:15 2020-03-03 21:29:16 1:28:01 1:10:21 0:17:40 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

pass 4821865 2020-03-03 19:58:45 2020-03-03 20:01:53 2020-03-03 20:23:51 0:21:58 0:12:18 0:09:40 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
fail 4821866 2020-03-03 19:58:46 2020-03-03 20:04:30 2020-03-03 21:02:29 0:57:59 0:47:58 0:10:01 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2020-03-03T20:36:16.329564+0000 mon.a (mon.0) 312 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 4821867 2020-03-03 19:58:47 2020-03-03 20:05:15 2020-03-03 20:31:14 0:25:59 0:13:45 0:12:14 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} 2
fail 4821868 2020-03-03 19:58:48 2020-03-03 20:07:14 2020-03-03 20:49:13 0:41:59 0:31:18 0:10:41 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

pass 4821869 2020-03-03 19:58:49 2020-03-03 20:07:14 2020-03-03 20:29:13 0:21:59 0:12:34 0:09:25 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-avl.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
fail 4821870 2020-03-03 19:58:50 2020-03-03 20:08:57 2020-03-03 21:16:58 1:08:01 0:46:17 0:21:44 smithi master centos 8.1 rados:verify/{centos_latest.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2020-03-03T20:49:11.264600+0000 mon.a (mon.0) 323 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
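For a quick overview of a listing like the one above, the pass/fail rows can be tallied with a short script. This is a hypothetical helper, not part of teuthology or pulpito; it only assumes that each result row begins with the status word followed by the numeric job ID, as in the rows shown here.

```python
import re

# Each result row starts with "pass" or "fail" followed by the job ID;
# continuation lines ("Failure Reason:", quoted log output) are skipped.
ROW_RE = re.compile(r"^(pass|fail)\s+(\d+)\b")

def summarize(listing: str) -> dict:
    """Count pass/fail rows and collect the IDs of failed jobs."""
    counts = {"pass": 0, "fail": 0}
    failed_jobs = []
    for line in listing.splitlines():
        m = ROW_RE.match(line)
        if not m:
            continue
        status, job_id = m.group(1), m.group(2)
        counts[status] += 1
        if status == "fail":
            failed_jobs.append(job_id)
    return {"counts": counts, "failed_jobs": failed_jobs}

sample = """\
pass 4821861 2020-03-03 19:58:41 ...
fail 4821862 2020-03-03 19:58:42 ...
Failure Reason:
"""
print(summarize(sample))
```

Run against the full listing above, this would report 4 passes and 4 fails, with jobs 4821862, 4821864, 4821866, and 4821870 flagged as failed.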