Status: fail
Job ID: 3442932
Posted: 2019-01-10 16:24:23
Started: 2019-01-10 16:39:37
Updated: 2019-01-10 19:57:40
Runtime: 3:18:03
Duration: 3:07:15
In Waiting: 0:10:48
Machine: smithi
Teuthology Branch: master
Description: rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/workunits.yaml}
Nodes: 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi117 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64d99fa3fc22b0a41b8bd9d06133081bf63f445d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'

Status: fail
Job ID: 3442933
Posted: 2019-01-10 16:24:23
Started: 2019-01-10 16:39:56
Updated: 2019-01-10 17:13:56
Runtime: 0:34:00
Duration: 0:22:40
In Waiting: 0:11:20
Machine: smithi
Teuthology Branch: master
Description: rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml}
Nodes: 2
Failure Reason:

"2019-01-10 16:57:54.402791 osd.5 osd.5 172.21.15.90:6804/15546 17 : cluster [ERR] 2.3 Unexpected Error: recovery ending with 1: {2:ccb17a62:::benchmark_data_smithi090_13056_object12189:head=33'776 flags = none}" in cluster log

Status: fail
Job ID: 3442934
Posted: 2019-01-10 16:24:24
Started: 2019-01-10 16:40:10
Updated: 2019-01-10 16:58:10
Runtime: 0:18:00
Duration: 0:07:21
In Waiting: 0:10:39
Machine: smithi
Teuthology Branch: master
Description: rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml}
Nodes: 1
Failure Reason:

Command failed (workunit test rados/test_librados_build.sh) on smithi023 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64d99fa3fc22b0a41b8bd9d06133081bf63f445d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

Status: pass
Job ID: 3442935
Posted: 2019-01-10 16:24:25
Started: 2019-01-10 16:40:31
Updated: 2019-01-10 17:38:31
Runtime: 0:58:00
Duration: 0:25:28
In Waiting: 0:32:32
Machine: smithi
Teuthology Branch: master
Description: rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml}
Nodes: 2

Status: fail
Job ID: 3442936
Posted: 2019-01-10 16:24:26
Started: 2019-01-10 16:40:53
Updated: 2019-01-10 20:16:56
Runtime: 3:36:03
Duration: 3:08:26
In Waiting: 0:27:37
Machine: smithi
Teuthology Branch: master
Description: rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/workunits.yaml}
Nodes: 2
Failure Reason:

Command failed (workunit test mgr/test_localpool.sh) on smithi060 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=64d99fa3fc22b0a41b8bd9d06133081bf63f445d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mgr/test_localpool.sh'

Status: pass
Job ID: 3442937
Posted: 2019-01-10 16:24:26
Started: 2019-01-10 16:41:24
Updated: 2019-01-10 17:51:24
Runtime: 1:10:00
Duration: 0:38:29
In Waiting: 0:31:31
Machine: smithi
Teuthology Branch: master
OS Type: centos
Description: rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}
Nodes: 2