Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 2142523 2018-02-02 14:53:56 2018-02-02 14:54:09 2018-02-02 15:22:09 0:28:00 0:09:43 0:18:17 smithi master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/scrub_test.yaml} 2
Failure Reason:

Command failed on smithi184 with status 1: "sudo cp /tmp/tmpy4Vi8u '/var/lib/ceph/osd/ceph-1/fuse/1.7_head/all/#1:e01220a0:::benchmark_data_smithi184_28663_object437:head#/data'"
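
For context, tasks/scrub_test.yaml deliberately corrupts an object's on-disk data and then expects scrub to report an inconsistency; the cp above appears to be that injection step, writing junk bytes over the object's data file through the objectstore FUSE mount. A loose sketch of the pattern, with hypothetical helper names (this is not the qa/tasks/scrub_test.py code):

    import shutil
    import subprocess

    def corrupt_object_data(fuse_object_dir, junk_file):
        # Overwrite the object's backing file exposed by the FuseStore
        # mount (e.g. .../fuse/<pgid>_head/all/<object>#/data) with junk.
        shutil.copyfile(junk_file, fuse_object_dir + '/data')

    def deep_scrub_and_check(pgid):
        # Ask the OSD to deep-scrub the PG; the test then expects the
        # PG to be flagged inconsistent.
        subprocess.check_call(['ceph', 'pg', 'deep-scrub', pgid])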

fail 2142524 2018-02-02 14:53:56 2018-02-02 14:54:10 2018-02-02 15:14:09 0:19:59 0:11:51 0:08:08 smithi master rados/objectstore/ceph_objectstore_tool.yaml 1
pass 2142525 2018-02-02 14:53:57 2018-02-02 14:54:10 2018-02-02 15:34:10 0:40:00 0:21:09 0:18:51 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
fail 2142526 2018-02-02 14:53:58 2018-02-02 14:55:54 2018-02-02 15:13:54 0:18:00 0:07:05 0:10:55 smithi master rados/objectstore/filestore-idempotent-aio-journal.yaml 1
Failure Reason:

./run_seed_to_range.sh errored out

fail 2142527 2018-02-02 14:53:59 2018-02-02 14:55:54 2018-02-02 15:13:54 0:18:00 0:07:00 0:11:00 smithi master rados/objectstore/filestore-idempotent.yaml 1
Failure Reason:

./run_seed_to_range.sh errored out

fail 2142528 2018-02-02 14:53:59 2018-02-02 14:56:07 2018-02-02 15:12:06 0:15:59 0:05:31 0:10:28 smithi master rados/objectstore/fusestore.yaml 1
Failure Reason:

Command failed (workunit test objectstore/test_fuse.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-sage-testing-2018-02-01-1734 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/objectstore/test_fuse.sh'

fail 2142529 2018-02-02 14:54:00 2018-02-02 14:56:08 2018-02-02 15:46:09 0:50:01 0:40:26 0:09:35 smithi master rados/singleton/{all/recovery-unfound-found.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
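
The check that failed here polls cluster health until it reports healthy or the retry budget runs out; 150 tries over 900 seconds works out to one probe roughly every 6 seconds. A minimal sketch of that pattern, assuming a fixed 6-second interval and shelling out to `ceph health` (hypothetical code, not the teuthology implementation):

    import subprocess
    import time

    def wait_until_healthy(max_tries=150, interval=6):
        # Poll 'ceph health' until HEALTH_OK; 150 tries * 6 s matches
        # the 900 s budget reported in the failure above.
        for _ in range(max_tries):
            out = subprocess.check_output(['ceph', 'health']).decode()
            if out.startswith('HEALTH_OK'):
                return
            time.sleep(interval)
        raise RuntimeError("'wait_until_healthy' reached maximum tries (%d) "
                           "after waiting for %d seconds"
                           % (max_tries, max_tries * interval))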

pass 2142530 2018-02-02 14:54:01 2018-02-02 14:56:10 2018-02-02 15:42:11 0:46:01 0:30:46 0:15:15 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2