Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 1272162 2017-06-09 00:49:40 2017-06-09 00:49:53 2017-06-09 02:47:54 1:58:01 1:54:13 0:03:48 smithi master rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml}} 3
Failure Reason:

'/home/ubuntu/cephtest/archive/syslog/misc.log:2017-06-09T00:53:34.542776+00:00 smithi083 ceph-create-keys[445381]: INFO:ceph-create-keys:ceph-mon admin socket not ready yet. ' in syslog

pass 1272163 2017-06-09 00:49:41 2017-06-09 00:49:54 2017-06-09 01:19:54 0:30:00 0:26:47 0:03:13 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/cache-pool-snaps.yaml} 2
pass 1272164 2017-06-09 00:49:42 2017-06-09 00:49:54 2017-06-09 01:19:53 0:29:59 0:29:24 0:00:35 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-pool-snaps.yaml} 2
pass 1272165 2017-06-09 00:49:42 2017-06-09 00:49:54 2017-06-09 01:43:54 0:54:00 0:52:50 0:01:10 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 1272166 2017-06-09 00:49:43 2017-06-09 00:49:54 2017-06-09 01:15:54 0:26:00 0:22:46 0:03:14 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

fail 1272167 2017-06-09 00:49:44 2017-06-09 00:49:54 2017-06-09 01:37:54 0:48:00 0:44:01 0:03:59 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

fail 1272168 2017-06-09 00:49:44 2017-06-09 00:49:54 2017-06-09 01:15:54 0:26:00 0:22:21 0:03:39 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

pass 1272169 2017-06-09 00:49:45 2017-06-09 00:49:54 2017-06-09 04:53:58 4:04:04 4:03:00 0:01:04 smithi master rados/objectstore/filestore-idempotent-aio-journal.yaml 1
pass 1272170 2017-06-09 00:49:46 2017-06-09 00:49:54 2017-06-09 05:11:59 4:22:05 4:21:09 0:00:56 smithi master rados/objectstore/filestore-idempotent.yaml 1
fail 1272171 2017-06-09 00:49:46 2017-06-09 00:49:53 2017-06-09 01:01:53 0:12:00 0:08:21 0:03:39 smithi master rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} 1
pass 1272172 2017-06-09 00:49:47 2017-06-09 00:49:54 2017-06-09 01:19:54 0:30:00 0:29:09 0:00:51 smithi master ubuntu 14.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/default.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 1272173 2017-06-09 00:49:48 2017-06-09 00:49:54 2017-06-09 01:05:53 0:15:59 0:10:53 0:05:06 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi074 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-yuri-testing_2017_7_9_2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 1272174 2017-06-09 00:49:48 2017-06-09 00:49:54 2017-06-09 01:05:54 0:16:00 0:11:54 0:04:06 smithi master centos rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} 1
Failure Reason:

Command failed on smithi024 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 1272175 2017-06-09 00:49:49 2017-06-09 00:49:54 2017-06-09 07:30:01 6:40:07 6:39:41 0:00:26 smithi master rados/objectstore/objectstore.yaml 1
fail 1272176 2017-06-09 00:49:50 2017-06-09 00:49:54 2017-06-09 01:03:54 0:14:00 0:10:19 0:03:41 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-yuri-testing_2017_7_9_2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
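Not part of the run output, but when triaging a listing like the one above it can help to pull out just the failed job IDs. The sketch below assumes the listing has been saved to a file (the name `run.txt` and the abbreviated sample rows are hypothetical); each job row begins with its status field, so anchoring a match at `fail ` selects the failed jobs and the second field is the job ID:

```shell
# Hypothetical saved listing; real rows carry the full timestamp and
# description columns, elided here with "..." for brevity.
cat > run.txt <<'EOF'
fail 1272162 2017-06-09 00:49:40 ...
pass 1272163 2017-06-09 00:49:41 ...
fail 1272166 2017-06-09 00:49:43 ...
EOF

# Job rows start with the status, so "^fail " matches only failed jobs;
# awk then prints the second whitespace-separated field, the job ID.
grep '^fail ' run.txt | awk '{print $2}'
```

The same anchored pattern with `grep -c '^fail '` gives a quick failure count for the run.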