| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
|---|---|---|---|---|---|---|---|---|---|---|
| sage | 2018-02-16 18:41:28 | 2018-02-16 18:44:01 | 2018-02-16 20:19:53 | 1:35:52 | rados | wip-sage-testing-2018-02-16-0837 | smithi | f4bbaff | 3 | 3 |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 2193826 | | 2018-02-16 18:42:16 | 2018-02-16 18:44:01 | 2018-02-16 20:19:53 | 1:35:52 | 1:24:19 | 0:11:33 | smithi | master | | | rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} | 3 |
| pass | 2193827 | | 2018-02-16 18:42:18 | 2018-02-16 18:45:58 | 2018-02-16 19:17:51 | 0:31:53 | 0:24:09 | 0:07:44 | smithi | master | | | rados/standalone/erasure-code.yaml | 1 |
| pass | 2193828 | | 2018-02-16 18:42:20 | 2018-02-16 18:45:59 | 2018-02-16 19:09:50 | 0:23:51 | 0:17:25 | 0:06:26 | smithi | master | centos | | rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} | 1 |
| fail | 2193829 | | 2018-02-16 18:42:22 | 2018-02-16 18:45:56 | 2018-02-16 19:21:51 | 0:35:55 | 0:27:29 | 0:08:26 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 |
| fail | 2193830 | | 2018-02-16 18:42:23 | 2018-02-16 18:45:56 | 2018-02-16 19:34:01 | 0:48:05 | 0:40:45 | 0:07:20 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
| pass | 2193831 | | 2018-02-16 18:42:25 | 2018-02-16 18:46:18 | 2018-02-16 19:04:12 | 0:17:54 | 0:09:52 | 0:08:02 | smithi | master | | | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 |

Failure Reasons:

- 2193826: Command failed (workunit test rbd/test_librbd_python.sh) on smithi012 with status 139: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-config TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'`
- 2193829: saw valgrind issues
- 2193830: "2018-02-16 19:29:12.940635 mon.b mon.0 172.21.15.10:6789/0 2448 : cluster [WRN] Manager daemon y is unresponsive. No standby daemons available." in cluster log