Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass | 2193144 | - | 2018-02-16 14:39:03 | 2018-02-16 14:39:10 | 2018-02-16 14:55:10 | 0:16:00 | 0:07:50 | 0:08:10 | smithi | master | - | - | rados/standalone/crush.yaml | 1
fail | 2193145 | - | 2018-02-16 14:39:03 | 2018-02-16 14:39:11 | 2018-02-16 16:18:32 | 1:39:21 | 1:27:15 | 0:12:06 | smithi | master | - | - | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} | 3
Failure Reason:

Command failed (workunit test rbd/test_librbd_python.sh) on smithi017 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-2158 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
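The status 139 above can be decoded with the standard POSIX shell convention: an exit status above 128 means the process was killed by signal (status − 128). A minimal sketch (the `status` value is taken from the failure above; everything else is illustrative):

```shell
# Decode a workunit exit status. Statuses above 128 follow the shell
# convention "128 + signal number", so 139 means the test process died
# on signal 11 (SIGSEGV), i.e. test_librbd_python.sh segfaulted.
status=139
if [ "$status" -gt 128 ]; then
  sig=$((status - 128))          # 139 - 128 = 11
  echo "killed by signal $sig"   # signal 11 = SIGSEGV
fi
```

By contrast, statuses of 128 or below (such as the status 1 in the erasure-code failure below) are the program's own exit code, not a signal.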

fail | 2193146 | - | 2018-02-16 14:39:04 | 2018-02-16 14:39:11 | 2018-02-16 15:13:11 | 0:34:00 | 0:23:59 | 0:10:01 | smithi | master | - | - | rados/standalone/erasure-code.yaml | 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-code.sh) on smithi205 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage3-testing-2018-02-12-2158 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code.sh'

pass | 2193147 | - | 2018-02-16 14:39:05 | 2018-02-16 14:39:11 | 2018-02-16 14:59:10 | 0:19:59 | 0:12:32 | 0:07:27 | smithi | master | - | - | rados/standalone/misc.yaml | 1
pass | 2193148 | - | 2018-02-16 14:39:06 | 2018-02-16 14:39:11 | 2018-02-16 15:01:11 | 0:22:00 | 0:12:46 | 0:09:14 | smithi | master | - | - | rados/standalone/mon.yaml | 1
pass | 2193149 | - | 2018-02-16 14:39:06 | 2018-02-16 14:39:11 | 2018-02-16 15:15:11 | 0:36:00 | 0:25:14 | 0:10:46 | smithi | master | - | - | rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_api_tests.yaml} | 2
pass | 2193150 | - | 2018-02-16 14:39:07 | 2018-02-16 14:39:11 | 2018-02-16 15:41:17 | 1:02:06 | 0:53:16 | 0:08:50 | smithi | master | - | - | rados/standalone/osd.yaml | 1
pass | 2193151 | - | 2018-02-16 14:39:08 | 2018-02-16 14:39:12 | 2018-02-16 15:23:15 | 0:44:03 | 0:34:09 | 0:09:54 | smithi | master | - | - | rados/standalone/scrub.yaml | 1
fail | 2193152 | - | 2018-02-16 14:39:09 | 2018-02-16 14:39:11 | 2018-02-16 15:17:11 | 0:38:00 | 0:29:36 | 0:08:24 | smithi | master | centos | - | rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} | 1
Failure Reason:

"2018-02-16 15:03:00.628300 mon.a mon.0 172.21.15.60:6789/0 49 : cluster [WRN] Manager daemon x is unresponsive. No standby daemons available." in cluster log

fail | 2193153 | - | 2018-02-16 14:39:09 | 2018-02-16 14:39:11 | 2018-02-16 15:21:13 | 0:42:02 | 0:30:52 | 0:11:10 | smithi | master | centos | - | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2
Failure Reason:

"2018-02-16 15:06:08.938078 mon.b mon.0 172.21.15.90:6789/0 134 : cluster [WRN] Manager daemon y is unresponsive. No standby daemons available." in cluster log

fail | 2193154 | - | 2018-02-16 14:39:10 | 2018-02-16 14:39:11 | 2018-02-16 15:39:21 | 1:00:10 | 0:36:15 | 0:23:55 | smithi | master | centos | - | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2
Failure Reason:

"2018-02-16 15:06:30.236762 mon.a mon.0 172.21.15.51:6789/0 107 : cluster [WRN] Manager daemon y is unresponsive. No standby daemons available." in cluster log

fail | 2193155 | - | 2018-02-16 14:39:11 | 2018-02-16 14:39:12 | 2018-02-16 14:57:12 | 0:18:00 | 0:09:10 | 0:08:50 | smithi | master | - | - | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1
Failure Reason:

Command failed on smithi141 with status 234: 'sudo -u ceph CEPH_ARGS=--no-mon-config ceph-monstore-tool /var/lib/ceph/mon/ceph-a rebuild -- --keyring /etc/ceph/ceph.keyring'