Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 901065 2017-03-10 16:37:49 2017-03-10 16:37:58 2017-03-10 18:35:59 1:58:01 1:52:21 0:05:40 ovh master rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml}} 3
Failure Reason:

Command failed on ovh060 with status 5: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 --cluster ceph user rm --uid foo.client.0 --purge-data'

pass 901066 2017-03-10 16:37:50 2017-03-10 16:37:58 2017-03-10 17:09:58 0:32:00 0:22:19 0:09:41 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/rados_api_tests.yaml} 2
pass 901067 2017-03-10 16:37:50 2017-03-10 16:37:58 2017-03-10 17:29:58 0:52:00 0:42:24 0:09:36 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
pass 901068 2017-03-10 16:37:51 2017-03-10 16:37:59 2017-03-10 17:05:58 0:27:59 0:16:14 0:11:45 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/rados_api_tests.yaml} 2
pass 901069 2017-03-10 16:37:51 2017-03-10 16:37:58 2017-03-10 16:57:57 0:19:59 0:10:13 0:09:46 ovh master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/admin_socket_objecter_requests.yaml} 2
pass 901070 2017-03-10 16:37:52 2017-03-10 16:37:58 2017-03-10 17:35:58 0:58:00 0:46:57 0:11:03 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-snaps.yaml} 2
pass 901071 2017-03-10 16:37:53 2017-03-10 16:37:58 2017-03-10 17:09:58 0:32:00 0:22:03 0:09:57 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/rados_api_tests.yaml} 2
pass 901072 2017-03-10 16:37:53 2017-03-10 16:37:58 2017-03-10 16:59:58 0:22:00 0:11:19 0:10:41 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/admin_socket_objecter_requests.yaml} 2
pass 901073 2017-03-10 16:37:54 2017-03-10 16:37:59 2017-03-10 17:21:58 0:43:59 0:32:53 0:11:06 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/cache-agent-small.yaml} 2
fail 901074 2017-03-10 16:37:55 2017-03-10 16:37:58 2017-03-10 17:19:58 0:42:00 0:33:47 0:08:13 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

timed out waiting for admin_socket to appear after osd.5 restart
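The harness restarts the daemon and then waits for its admin socket to reappear before continuing; the job fails when that socket never shows up within the timeout. A minimal sketch of the same check, assuming the default socket path on the test node (the path, timeout, and 'status' query here are illustrative, not taken from this run):

    # poll for the restarted OSD's admin socket, then query it (illustrative)
    sock=/var/run/ceph/ceph-osd.5.asok
    for i in $(seq 1 120); do
        [ -S "$sock" ] && break
        sleep 1
    done
    sudo ceph --admin-daemon "$sock" status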

pass 901075 2017-03-10 16:37:56 2017-03-10 16:37:58 2017-03-10 17:23:58 0:46:00 0:36:29 0:09:31 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/cache-snaps.yaml} 2
pass 901076 2017-03-10 16:37:56 2017-03-10 16:37:59 2017-03-10 17:13:58 0:35:59 0:25:07 0:10:52 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/radosbench.yaml} 2
pass 901077 2017-03-10 16:37:57 2017-03-10 16:37:58 2017-03-10 17:13:58 0:36:00 0:26:23 0:09:37 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/rados_api_tests.yaml} 2
pass 901078 2017-03-10 16:37:58 2017-03-10 16:37:59 2017-03-10 16:49:58 0:11:59 0:06:12 0:05:47 ovh master rados/singleton/{all/mon-seesaw.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore.yaml rados.yaml} 1
pass 901079 2017-03-10 16:37:59 2017-03-10 16:38:00 2017-03-10 17:24:00 0:46:00 0:37:54 0:08:06 ovh master centos 7.3 rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml supported/centos_latest.yaml thrashers/mapgap.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 901080 2017-03-10 16:37:59 2017-03-10 16:38:00 2017-03-10 16:58:00 0:20:00 0:07:48 0:12:12 ovh master rados/monthrash/{ceph/ceph.yaml clusters/3-mons.yaml fs/xfs.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/rados_5925.yaml} 2
pass 901081 2017-03-10 16:38:00 2017-03-10 16:38:02 2017-03-10 17:24:02 0:46:00 0:36:29 0:09:31 ovh master centos 7.3 rados/thrash-erasure-code-isa/{arch/x86_64.yaml clusters/{fixed-2.yaml openstack.yaml} fs/xfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 901082 2017-03-10 16:38:01 2017-03-10 16:38:02 2017-03-10 17:00:01 0:21:59 0:11:42 0:10:17 ovh master rados/singleton/{all/resolve_stuck_peering.yaml fs/xfs.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore.yaml rados.yaml} 2
Failure Reason:

Command failed on ovh016 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2'

fail 901083 2017-03-10 16:38:01 2017-03-10 16:38:02 2017-03-10 16:48:02 0:10:00 0:04:18 0:05:42 ovh master rados/objectstore/objectstore.yaml 1
Failure Reason:

Command failed on ovh013 with status 134: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/ostest && cd $TESTDIR/ostest && ulimit -c 0 && ulimit -Sn 4096 && ceph_test_objectstore --gtest_filter=-*/3'"
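
Exit status 134 is 128 + 6 (SIGABRT), i.e. the test binary aborted rather than failing an assertion cleanly. A hedged way to re-run the same suite by hand, assuming ceph_test_objectstore is available from the ceph-test package (the working directory is illustrative):

    # reproduce the teuthology invocation outside the harness (illustrative)
    mkdir -p /tmp/ostest && cd /tmp/ostest
    ulimit -c 0 && ulimit -Sn 4096
    ceph_test_objectstore --gtest_filter=-*/3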