Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 2646871 2018-06-09 19:14:52 2018-06-09 19:15:21 2018-06-09 19:41:20 0:25:59 0:19:43 0:06:16 smithi master centos 7.4 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/cosbench_64K_read_write.yaml} 1
pass 2646872 2018-06-09 19:14:53 2018-06-09 19:15:22 2018-06-09 19:37:21 0:21:59 0:13:33 0:08:26 smithi master centos rados/singleton-flat/valgrind-leaks.yaml 1
fail 2646873 2018-06-09 19:14:55 2018-06-09 19:15:22 2018-06-09 23:49:27 4:34:05 4:22:18 0:11:47 smithi master centos 7.4 rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{centos_latest.yaml} thrashosds-health.yaml} 3
Failure Reason:

timed out waiting for admin_socket to appear after osd.3 restart
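
For context, this failure means teuthology's restart helper gave up polling for osd.3's admin socket after restarting the daemon. A minimal sketch of an equivalent manual check, assuming the default socket path (/var/run/ceph/ceph-osd.3.asok) rather than the run-specific path teuthology actually uses, and a hypothetical 5-minute budget:

SOCK=/var/run/ceph/ceph-osd.3.asok
for i in $(seq 1 60); do                      # ~5 minutes; the real teuthology timeout differs
    if [ -S "$SOCK" ]; then
        ceph daemon osd.3 status && break     # socket exists and the daemon answers
    fi
    sleep 5
done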

pass 2646874 2018-06-09 19:14:56 2018-06-09 19:15:22 2018-06-09 20:05:22 0:50:00 0:24:32 0:25:28 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml two-plus-three.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml msgr/random.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} 3
pass 2646875 2018-06-09 19:14:57 2018-06-09 19:15:26 2018-06-09 19:59:26 0:44:00 0:30:42 0:13:18 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
fail 2646876 2018-06-09 19:14:58 2018-06-09 19:17:05 2018-06-09 20:03:05 0:46:00 0:37:30 0:08:30 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-unexpected-clone.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-24452 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-unexpected-clone.sh'
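
Standalone workunits like this one can usually be reproduced outside teuthology. A sketch, assuming a local build tree and that this branch ships the qa/run-standalone.sh wrapper:

cd ceph/build                                    # assumed local build directory
../qa/run-standalone.sh osd-unexpected-clone.sh  # runs qa/standalone/scrub/osd-unexpected-clone.sh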

fail 2646877 2018-06-09 19:14:59 2018-06-09 19:17:05 2018-06-09 23:03:14 3:46:09 3:35:31 0:10:38 smithi master centos 7.4 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi121 with status 134: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ulimit -c 0 && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" ceph_test_objectstore --gtest_filter=-*/3\''
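
Exit status 134 is 128 + SIGABRT, i.e. the test binary aborted (typically a failed assertion) rather than reporting an ordinary gtest failure; --gtest_filter=-*/3 excludes every test instance with parameter index 3. A sketch of the same invocation against a local build, with paths assumed rather than taken from the teuthology host:

mkdir -p /tmp/ostest && cd /tmp/ostest
ulimit -c 0 && ulimit -Sn 16384               # match the job's core and file-descriptor limits
CEPH_ARGS="--no-log-to-stderr --log-file ./ceph_test_objectstore.log --debug-filestore 20 --debug-bluestore 20" \
  ceph_test_objectstore --gtest_filter=-*/3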