Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass | 1708882 | 2017-10-06 14:24:40 | 2017-10-06 14:26:24 | 2017-10-06 14:52:24 | 0:26:00 | 0:18:55 | 0:07:05 | smithi | master | centos | - | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados/{op_queue/opclass.yaml rados.yaml} tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2
pass | 1708883 | 2017-10-06 14:24:41 | 2017-10-06 14:27:49 | 2017-10-06 16:37:51 | 2:10:02 | 2:04:38 | 0:05:24 | smithi | master | - | - | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados/{op_queue/opclass.yaml rados.yaml}} | 1
pass | 1708884 | 2017-10-06 14:24:41 | 2017-10-06 14:28:34 | 2017-10-06 15:30:35 | 1:02:01 | 0:22:42 | 0:39:19 | smithi | master | - | - | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados/{op_queue/client.yaml rados.yaml}} | 2
fail | 1708885 | 2017-10-06 14:24:42 | 2017-10-06 14:28:36 | 2017-10-06 15:02:36 | 0:34:00 | 0:26:18 | 0:07:42 | smithi | master | - | - | rados/standalone/scrub.yaml | 1
Failure Reason (job 1708885):

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eric-dmclock-only TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
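
To triage this locally, the same standalone script can usually be re-run outside teuthology. A minimal sketch, assuming a built Ceph working tree on the eric-dmclock-only branch and that the qa/run-standalone.sh helper is available in that checkout (both are assumptions; the adjust-ulimits and ceph-coverage wrappers teuthology uses above are omitted):

    # Re-run only the failing scrub standalone test from the build directory.
    # Assumption: ../qa/run-standalone.sh exists here and accepts a script name;
    # the 3h timeout mirrors the teuthology command above.
    cd ceph/build
    timeout 3h ../qa/run-standalone.sh osd-scrub-repair.sh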