Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 2054747 2018-01-10 14:29:58 2018-01-10 14:34:23 2018-01-10 15:04:23 0:30:00 0:23:16 0:06:44 ovh master rados/standalone/osd.yaml 1
pass 2054749 2018-01-10 14:29:59 2018-01-10 14:36:50 2018-01-10 17:40:53 3:04:03 0:17:47 2:46:16 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 2054751 2018-01-10 14:30:00 2018-01-10 14:40:24 2018-01-10 15:12:24 0:32:00 0:27:48 0:04:12 ovh master rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
pass 2054753 2018-01-10 14:30:00 2018-01-10 14:50:36 2018-01-10 15:30:36 0:40:00 0:31:58 0:08:02 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
pass 2054755 2018-01-10 14:30:01 2018-01-10 14:52:06 2018-01-10 15:30:06 0:38:00 0:29:29 0:08:31 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
pass 2054757 2018-01-10 14:30:02 2018-01-10 14:54:29 2018-01-10 15:54:29 1:00:00 0:48:56 0:11:04 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
pass 2054759 2018-01-10 14:30:02 2018-01-10 15:04:09 2018-01-10 15:26:09 0:22:00 0:16:51 0:05:09 ovh master rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_rand_read.yaml} 1
pass 2054761 2018-01-10 14:30:03 2018-01-10 15:04:24 2018-01-10 15:52:24 0:48:00 0:40:23 0:07:37 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
pass 2054763 2018-01-10 14:30:03 2018-01-10 15:06:32 2018-01-10 15:40:32 0:34:00 0:25:03 0:08:57 ovh master rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml} 2
pass 2054765 2018-01-10 14:30:04 2018-01-10 15:11:30 2018-01-10 15:47:30 0:36:00 0:25:57 0:10:03 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
pass 2054767 2018-01-10 14:30:05 2018-01-10 15:12:25 2018-01-10 15:42:25 0:30:00 0:25:46 0:04:14 ovh master rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
pass 2054769 2018-01-10 14:30:06 2018-01-10 15:26:19 2018-01-10 16:14:19 0:48:00 0:38:51 0:09:09 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
pass 2054772 2018-01-10 14:30:06 2018-01-10 15:30:09 2018-01-10 16:06:09 0:36:00 0:27:18 0:08:42 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_workunit_loadgen_mix.yaml} 2
pass 2054774 2018-01-10 14:30:07 2018-01-10 15:30:41 2018-01-10 16:08:40 0:37:59 0:29:11 0:08:48 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
pass 2054776 2018-01-10 14:30:08 2018-01-10 15:40:35 2018-01-10 16:08:34 0:27:59 0:18:12 0:09:47 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
pass 2054778 2018-01-10 14:30:09 2018-01-10 15:42:31 2018-01-10 17:02:32 1:20:01 1:02:24 0:17:37 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
pass 2054780 2018-01-10 14:30:09 2018-01-10 15:47:38 2018-01-10 16:57:39 1:10:01 1:01:28 0:08:33 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
pass 2054782 2018-01-10 14:30:10 2018-01-10 15:52:35 2018-01-10 16:40:35 0:48:00 0:39:43 0:08:17 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
pass 2054784 2018-01-10 14:30:10 2018-01-10 15:54:31 2018-01-10 16:12:30 0:17:59 0:12:51 0:05:08 ovh master rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
pass 2054786 2018-01-10 14:30:11 2018-01-10 15:55:42 2018-01-10 16:57:42 1:02:00 0:54:07 0:07:53 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
pass 2054788 2018-01-10 14:30:12 2018-01-10 16:06:19 2018-01-10 16:40:19 0:34:00 0:26:23 0:07:37 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
pass 2054790 2018-01-10 14:30:13 2018-01-10 16:08:35 2018-01-10 16:58:36 0:50:01 0:40:58 0:09:03 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
pass 2054792 2018-01-10 14:30:13 2018-01-10 16:08:42 2018-01-10 16:30:41 0:21:59 0:16:43 0:05:16 ovh master rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_seq_read.yaml} 1
pass 2054794 2018-01-10 14:30:14 2018-01-10 16:12:39 2018-01-10 16:46:39 0:34:00 0:30:05 0:03:55 ovh master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
pass 2054796 2018-01-10 14:30:14 2018-01-10 16:14:28 2018-01-10 17:34:29 1:20:01 0:43:48 0:36:13 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 2054798 2018-01-10 14:30:15 2018-01-10 16:30:43 2018-01-10 17:46:44 1:16:01 1:06:05 0:09:56 ovh master centos 7.4 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported/centos_latest.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 2054800 2018-01-10 14:30:16 2018-01-10 16:40:21 2018-01-10 17:00:20 0:19:59 0:13:29 0:06:30 ovh master rados/multimon/{clusters/6.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_clock_with_skews.yaml} 2
Failure Reason:

expected MON_CLOCK_SKEW but got none

pass 2054802 2018-01-10 14:30:16 2018-01-10 16:40:36 2018-01-10 17:18:36 0:38:00 0:29:38 0:08:22 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} 2
pass 2054804 2018-01-10 14:30:17 2018-01-10 16:46:44 2018-01-10 17:28:44 0:42:00 0:28:52 0:13:08 ovh master rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
pass 2054806 2018-01-10 14:30:18 2018-01-10 16:57:49 2018-01-10 17:33:49 0:36:00 0:25:05 0:10:55 ovh master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
pass 2054808 2018-01-10 14:30:18 2018-01-10 16:57:49 2018-01-10 17:21:48 0:23:59 0:18:59 0:05:00 ovh master rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
pass 2054810 2018-01-10 14:30:19 2018-01-10 16:58:36 2018-01-10 17:48:37 0:50:01 0:39:59 0:10:02 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
fail 2054812 2018-01-10 14:30:20 2018-01-11 01:59:25 2018-01-11 02:25:25 0:26:00 0:15:21 0:10:39 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054814 2018-01-10 14:30:20 2018-01-11 01:59:24 2018-01-11 02:19:26 0:20:02 0:12:30 0:07:32 ovh master rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054816 2018-01-10 14:30:21 2018-01-11 01:59:25 2018-01-11 02:29:26 0:30:01 0:16:15 0:13:46 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054818 2018-01-10 14:30:22 2018-01-11 01:59:24 2018-01-11 02:19:30 0:20:06 0:11:01 0:09:05 ovh master rados/objectstore/objectcacher-stress.yaml 1
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054820 2018-01-10 14:30:22 2018-01-11 01:59:25 2018-01-11 02:29:26 0:30:01 0:16:22 0:13:39 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_workunit_loadgen_mostlyread.yaml} 2
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054822 2018-01-10 14:30:23 2018-01-11 01:59:24 2018-01-11 02:23:26 0:24:02 0:14:32 0:09:30 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054824 2018-01-10 14:30:23 2018-01-11 01:59:26 2018-01-11 02:29:26 0:30:00 0:16:13 0:13:47 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh018 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054826 2018-01-10 14:30:24 2018-01-11 01:59:25 2018-01-11 02:21:27 0:22:02 0:12:17 0:09:45 ovh master rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054828 2018-01-10 14:30:25 2018-01-11 01:59:26 2018-01-11 02:25:26 0:26:00 0:15:00 0:11:00 ovh master rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054830 2018-01-10 14:30:25 2018-01-11 01:59:26 2018-01-11 02:25:26 0:26:00 0:15:29 0:10:31 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh008 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054832 2018-01-10 14:30:26 2018-01-11 01:59:25 2018-01-11 02:23:26 0:24:01 0:14:06 0:09:55 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054834 2018-01-10 14:30:27 2018-01-11 01:59:27 2018-01-11 02:35:28 0:36:01 0:10:42 0:25:19 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh074 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054836 2018-01-10 14:30:27 2018-01-11 01:59:25 2018-01-11 02:27:27 0:28:02 0:16:23 0:11:39 ovh master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/workunits.yaml} 2
Failure Reason:

Command failed on ovh085 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054838 2018-01-10 14:30:28 2018-01-11 01:59:25 2018-01-11 02:21:26 0:22:01 0:12:43 0:09:18 ovh master rados/perf/{ceph.yaml objectstore/bluestore.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_write.yaml} 1
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054840 2018-01-10 14:30:29 2018-01-11 01:59:26 2018-01-11 02:21:26 0:22:00 0:13:39 0:08:21 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh070 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054842 2018-01-10 14:30:29 2018-01-11 01:59:27 2018-01-11 02:23:28 0:24:01 0:14:18 0:09:43 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh064 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054844 2018-01-10 14:30:30 2018-01-11 01:59:27 2018-01-11 02:21:29 0:22:02 0:13:30 0:08:32 ovh master rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh044 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054846 2018-01-10 14:30:31 2018-01-11 01:59:25 2018-01-11 02:27:26 0:28:01 0:15:26 0:12:35 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh063 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054848 2018-01-10 14:30:32 2018-01-11 01:59:27 2018-01-11 02:27:27 0:28:00 0:16:08 0:11:52 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054850 2018-01-10 14:30:32 2018-01-11 01:59:27 2018-01-11 02:21:29 0:22:02 0:12:53 0:09:09 ovh master rados/standalone/scrub.yaml 1
Failure Reason:

Command failed on ovh100 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054852 2018-01-10 14:30:33 2018-01-11 01:59:25 2018-01-11 02:31:27 0:32:02 0:16:20 0:15:42 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on ovh083 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054854 2018-01-10 14:30:34 2018-01-11 01:59:26 2018-01-11 02:27:27 0:28:01 0:16:01 0:12:00 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh048 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054856 2018-01-10 14:30:34 2018-01-11 01:59:27 2018-01-11 02:19:27 0:20:00 0:11:15 0:08:45 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054858 2018-01-10 14:30:35 2018-01-11 01:59:26 2018-01-11 02:21:28 0:22:02 0:12:42 0:09:20 ovh master rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh019 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054860 2018-01-10 14:30:35 2018-01-11 01:59:26 2018-01-11 02:25:30 0:26:04 0:15:55 0:10:09 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/readwrite.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054862 2018-01-10 14:30:36 2018-01-11 01:59:27 2018-01-11 02:37:28 0:38:01 0:16:10 0:21:51 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh015 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054864 2018-01-10 14:30:37 2018-01-11 01:59:26 2018-01-11 02:21:29 0:22:03 0:12:35 0:09:28 ovh master rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh060 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054865 2018-01-10 14:30:37 2018-01-11 01:59:27 2018-01-11 02:29:27 0:30:00 0:16:16 0:13:44 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054867 2018-01-10 14:30:38 2018-01-11 01:59:27 2018-01-11 02:29:27 0:30:00 0:15:47 0:14:13 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh051 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054870 2018-01-10 14:30:39 2018-01-11 01:59:27 2018-01-11 02:35:27 0:36:00 0:10:43 0:25:17 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh026 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054871 2018-01-10 14:30:39 2018-01-11 01:59:27 2018-01-11 02:27:27 0:28:00 0:15:44 0:12:16 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054873 2018-01-10 14:30:40 2018-01-11 01:59:28 2018-01-11 02:33:29 0:34:01 0:10:33 0:23:28 ovh master rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml workloads/sample_fio.yaml} 1
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054875 2018-01-10 14:30:40 2018-01-11 01:59:27 2018-01-11 02:29:28 0:30:01 0:15:13 0:14:48 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054877 2018-01-10 14:30:41 2018-01-11 01:59:27 2018-01-11 02:25:29 0:26:02 0:14:58 0:11:04 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh003 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054879 2018-01-10 14:30:42 2018-01-11 01:59:28 2018-01-11 02:35:31 0:36:03 0:09:51 0:26:12 ovh master rados/singleton/{all/mon-seesaw.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054881 2018-01-10 14:30:42 2018-01-11 01:59:28 2018-01-11 02:25:30 0:26:02 0:12:07 0:13:55 ovh master rados/objectstore/objectstore.yaml 1
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054883 2018-01-10 14:30:43 2018-01-11 01:59:28 2018-01-11 02:39:29 0:40:01 0:10:44 0:29:17 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh094 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054885 2018-01-10 14:30:43 2018-01-11 01:59:28 2018-01-11 02:39:30 0:40:02 0:10:20 0:29:42 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh044 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054887 2018-01-10 14:30:44 2018-01-11 01:59:28 2018-01-11 02:43:30 0:44:02 0:11:35 0:32:27 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh052 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054889 2018-01-10 14:30:45 2018-01-11 01:59:28 2018-01-11 02:39:30 0:40:02 0:11:25 0:28:37 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh076 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054891 2018-01-10 14:30:45 2018-01-11 01:59:28 2018-01-11 02:35:31 0:36:03 0:09:39 0:26:24 ovh master rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054893 2018-01-10 14:30:46 2018-01-11 01:59:28 2018-01-11 02:45:30 0:46:02 0:10:58 0:35:04 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh012 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054895 2018-01-10 14:30:47 2018-01-11 01:59:28 2018-01-11 02:47:30 0:48:02 0:12:29 0:35:33 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on ovh098 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054897 2018-01-10 14:30:47 2018-01-11 01:59:28 2018-01-11 02:43:30 0:44:02 0:11:41 0:32:21 ovh master ubuntu 16.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054899 2018-01-10 14:30:48 2018-01-11 01:59:28 2018-01-11 02:43:30 0:44:02 0:11:32 0:32:30 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

pass 2054901 2018-01-10 14:30:49 2018-01-11 02:22:11 2018-01-11 02:47:24 0:25:13 0:19:08 0:06:05 ovh master centos rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} 1
fail 2054903 2018-01-10 14:30:49 2018-01-11 02:21:57 2018-01-11 02:47:21 0:25:24 0:12:05 0:13:19 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/repair_test.yaml} 2
Failure Reason:

Command failed on ovh032 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054905 2018-01-10 14:30:50 2018-01-11 02:22:08 2018-01-11 02:41:22 0:19:14 0:10:39 0:08:35 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054907 2018-01-10 14:30:50 2018-01-11 02:22:14 2018-01-11 02:43:32 0:21:18 0:11:47 0:09:31 ovh master rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054909 2018-01-10 14:30:51 2018-01-11 02:22:13 2018-01-11 02:43:34 0:21:21 0:12:37 0:08:44 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054911 2018-01-10 14:30:52 2018-01-11 02:22:14 2018-01-11 02:37:38 0:15:24 0:09:57 0:05:27 ovh master rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml workloads/sample_radosbench.yaml} 1
Failure Reason:

Command failed on ovh100 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054913 2018-01-10 14:30:52 2018-01-11 02:22:18 2018-01-11 02:43:42 0:21:24 0:10:47 0:10:37 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054915 2018-01-10 14:30:53 2018-01-11 02:22:18 2018-01-11 02:39:35 0:17:17 0:10:32 0:06:45 ovh master rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh031 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054917 2018-01-10 14:30:54 2018-01-11 02:22:18 2018-01-11 03:07:46 0:45:28 0:11:37 0:33:51 ovh master rados/multimon/{clusters/9.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml} 3
Failure Reason:

Command failed on ovh070 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054919 2018-01-10 14:30:54 2018-01-11 02:22:18 2018-01-11 02:45:41 0:23:23 0:09:57 0:13:26 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054921 2018-01-10 14:30:55 2018-01-11 02:23:39 2018-01-11 02:47:38 0:23:59 0:11:43 0:12:16 ovh master rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} 2
Failure Reason:

Command failed on ovh010 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

pass 2054923 2018-01-10 14:30:56 2018-01-11 02:23:39 2018-01-11 03:11:39 0:48:00 0:34:09 0:13:51 ovh master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
fail 2054925 2018-01-10 14:30:56 2018-01-11 02:23:39 2018-01-11 02:47:38 0:23:59 0:12:26 0:11:33 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054927 2018-01-10 14:30:57 2018-01-11 02:25:27 2018-01-11 02:47:27 0:22:00 0:12:33 0:09:27 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh085 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054929 2018-01-10 14:30:58 2018-01-11 02:25:27 2018-01-11 02:45:27 0:20:00 0:10:46 0:09:14 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054931 2018-01-10 14:30:58 2018-01-11 02:25:27 2018-01-11 02:47:27 0:22:00 0:12:37 0:09:23 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh018 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054933 2018-01-10 14:30:59 2018-01-11 02:25:30 2018-01-11 02:39:29 0:13:59 0:10:18 0:03:41 ovh master rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054935 2018-01-10 14:31:00 2018-01-11 02:25:31 2018-01-11 02:47:31 0:22:00 0:12:03 0:09:57 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'
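All of the status-100 failures above abort at the same step: installing the 13.0.0-4563-g5fd5e9b-1xenial dev build of the ceph packages. A minimal sketch for confirming from an affected node whether that build is still published for xenial (assuming shell access to the node and that the dev repo is already configured in /etc/apt/sources.list.d/; the version string is copied from the failure reasons above, and the package subset chosen here is just illustrative):

    # Refresh the package index and see which ceph versions apt can actually find.
    sudo apt-get update
    apt-cache policy ceph ceph-common librados2

    # Simulate the failing install without touching the system (-s = dry run);
    # apt-get also exits with status 100 if the pinned version is not in any repo.
    sudo apt-get install -s ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial

If apt-cache shows no candidate at that version, the build most likely never reached (or was purged from) the mirror the nodes are pointed at, which is consistent with the uniform exit status 100 above.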

fail 2054937 2018-01-10 14:31:00 2018-01-11 02:25:32 2018-01-11 02:57:32 0:32:00 0:10:24 0:21:36 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Ansible failed on ovh100.front.sepia.ceph.com while running 'cpan Amazon::S3' (rc 2, 'non-zero return code', duration 0:00:55). The CPAN mirror http://apt-mirror.sepia.ceph.com/CPAN/ was reachable only intermittently: several fetches succeeded after wget retries, but others ended with 'Temporary failure in name resolution' / 'wget: unable to resolve host address apt-mirror.sepia.ceph.com'. CPAN also warned that the mirror index was stale ('Database was generated on Fri, 12 Feb 2016 02:17:02 GMT ... This index file is 699 days old.'). The prerequisites Digest::MD5::File, LWP::UserAgent::Determined, Digest::HMAC_SHA1, Class::Accessor::Fast and XML::SAX::Base were built, tested and installed, but every attempt to download authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz failed ('LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]', wget returning status 4 with a zero-byte file), so the run gave up with 'Could not fetch authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz' and Amazon::S3 was never installed.
Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz ok', 'Configuring D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Digest::MD5::File', 'Writing MYMETA.yml and MYMETA.json', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz', 'cp File.pm blib/lib/Digest/MD5/File.pm', 'Manifying 1 pod document', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/1.t .. ok', 'All tests successful.', 'Files=1, Tests=6, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.06 cusr 0.00 csys = 0.08 CPU)', 'Result: PASS', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 1 pod document', 'Installing /usr/local/share/perl/5.22.1/Digest/MD5/File.pm', 'Installing /usr/local/man/man3/Digest::MD5::File.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'LWP::UserAgent::Determined'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/A/AL/ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/A/AL/ALEXMV/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/A/AL/ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz ok', 'Configuring A/AL/ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for LWP::UserAgent::Determined', 'Writing MYMETA.yml and MYMETA.json', ' ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for A/AL/ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', 'cp lib/LWP/UserAgent/Determined.pm blib/lib/LWP/UserAgent/Determined.pm', 'Manifying 1 pod document', ' ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/01_about_verbose.t .... ok', 't/10_determined_test.t .. 
ok', 'All tests successful.', 'Files=2, Tests=15, 38 wallclock secs ( 0.04 usr 0.00 sys + 0.10 cusr 0.00 csys = 0.14 CPU)', 'Result: PASS', ' ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 1 pod document', 'Installing /usr/local/share/perl/5.22.1/LWP/UserAgent/Determined.pm', 'Installing /usr/local/man/man3/LWP::UserAgent::Determined.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' ALEXMV/LWP-UserAgent-Determined-1.07.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'Digest::HMAC_SHA1'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/CHECKSUMS', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/CHECKSUMS.gz', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/CHECKSUMS.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz ok', 'Configuring G/GA/GAAS/Digest-HMAC-1.03.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Digest::HMAC', 'Writing MYMETA.yml and MYMETA.json', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for G/GA/GAAS/Digest-HMAC-1.03.tar.gz', 'cp lib/Digest/HMAC_MD5.pm blib/lib/Digest/HMAC_MD5.pm', 'cp lib/Digest/HMAC_SHA1.pm blib/lib/Digest/HMAC_SHA1.pm', 'cp lib/Digest/HMAC.pm blib/lib/Digest/HMAC.pm', 'Manifying 3 pod documents', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/rfc2202.t .. 
ok', 'All tests successful.', 'Files=1, Tests=14, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.01 cusr 0.00 csys = 0.04 CPU)', 'Result: PASS', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 3 pod documents', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC.pm', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC_SHA1.pm', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC_MD5.pm', 'Installing /usr/local/man/man3/Digest::HMAC.3pm', 'Installing /usr/local/man/man3/Digest::HMAC_MD5.3pm', 'Installing /usr/local/man/man3/Digest::HMAC_SHA1.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'Class::Accessor::Fast'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/Class-Accessor-0.34.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS.gz', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/CHECKSUMS.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/CHECKSUMS.tmp6716" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/CHECKSUMS.tmp6716 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/CHECKSUMS.tmp6716'. Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS.gz', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/CHECKSUMS.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/K/KA/KASEI/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/K/KA/KASEI/Class-Accessor-0.34.tar.gz ok', 'Configuring K/KA/KASEI/Class-Accessor-0.34.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Class::Accessor', 'Writing MYMETA.yml and MYMETA.json', ' KASEI/Class-Accessor-0.34.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for K/KA/KASEI/Class-Accessor-0.34.tar.gz', 'cp lib/Class/Accessor.pm blib/lib/Class/Accessor.pm', 'cp lib/Class/Accessor/Fast.pm blib/lib/Class/Accessor/Fast.pm', 'cp lib/Class/Accessor/Faster.pm blib/lib/Class/Accessor/Faster.pm', 'Manifying 3 pod documents', ' KASEI/Class-Accessor-0.34.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/accessors.t ..... ok', 't/aliases.t ....... ok', 't/antlers.t ....... ok', 't/bestpractice.t .. ok', 't/caller.t ........ skipped: Sub::Name is not installed', 't/croak.t ......... ok', 't/getset.t ........ 
ok', 'All tests successful.', 'Files=7, Tests=133, 0 wallclock secs ( 0.03 usr 0.03 sys + 0.16 cusr 0.02 csys = 0.24 CPU)', 'Result: PASS', ' KASEI/Class-Accessor-0.34.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 3 pod documents', 'Installing /usr/local/share/perl/5.22.1/Class/Accessor.pm', 'Installing /usr/local/share/perl/5.22.1/Class/Accessor/Fast.pm', 'Installing /usr/local/share/perl/5.22.1/Class/Accessor/Faster.pm', 'Installing /usr/local/man/man3/Class::Accessor::Faster.3pm', 'Installing /usr/local/man/man3/Class::Accessor.3pm', 'Installing /usr/local/man/man3/Class::Accessor::Fast.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' KASEI/Class-Accessor-0.34.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'XML::Simple'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz ok', 'Configuring G/GR/GRANTM/XML-Simple-2.22.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for XML::Simple', 'Writing MYMETA.yml and MYMETA.json', ' GRANTM/XML-Simple-2.22.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for G/GR/GRANTM/XML-Simple-2.22.tar.gz', '---- Unsatisfied dependencies detected during ----', '---- GRANTM/XML-Simple-2.22.tar.gz ----', ' XML::SAX::Expat [requires]', ' XML::NamespaceSupport [requires]', ' XML::SAX [requires]', "Running install for module 'XML::SAX::Expat'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/B/BJ/BJOERN/XML-SAX-Expat-0.51.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/B/BJ/BJOERN/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/B/BJ/BJOERN/XML-SAX-Expat-0.51.tar.gz ok', 'Configuring B/BJ/BJOERN/XML-SAX-Expat-0.51.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for XML::SAX::Expat', 'Writing MYMETA.yml and MYMETA.json', ' BJOERN/XML-SAX-Expat-0.51.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for B/BJ/BJOERN/XML-SAX-Expat-0.51.tar.gz', '---- Unsatisfied dependencies detected during ----', '---- BJOERN/XML-SAX-Expat-0.51.tar.gz ----', ' XML::SAX::Base [requires]', ' XML::NamespaceSupport [requires]', ' XML::SAX [requires]', "Running install for module 'XML::SAX::Base'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/XML-SAX-Base-1.08.tar.gz', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/G/GR/GRANTM/XML-SAX-Base-1.08.tar.gz ok', 'Configuring G/GR/GRANTM/XML-SAX-Base-1.08.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for XML::SAX::Base', 'Writing MYMETA.yml and MYMETA.json', ' GRANTM/XML-SAX-Base-1.08.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for 
G/GR/GRANTM/XML-SAX-Base-1.08.tar.gz', 'cp lib/XML/SAX/Exception.pm blib/lib/XML/SAX/Exception.pm', 'cp BuildSAXBase.pl blib/lib/XML/SAX/BuildSAXBase.pl', 'cp lib/XML/SAX/Base.pm blib/lib/XML/SAX/Base.pm', 'Manifying 3 pod documents', ' GRANTM/XML-SAX-Base-1.08.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/00basic.t ............. ok', 't/01exception.t ......... ok', 't/01simpledriver.t ...... ok', 't/02simplefilter.t ...... ok', 't/03chdriver.t .......... ok', 't/04chfilter.t .......... ok', 't/05dtdhdriver.t ........ ok', 't/06lexhdriver.t ........ ok', 't/07declhdriver.t ....... ok', 't/08errorhdriver.t ...... ok', 't/09resoldriver.t ....... ok', 't/10dochdriver.t ........ ok', 't/11sax1multiclass.t .... ok', 't/12sax2multiclass.t .... ok', 't/13handlerswitch.t ..... ok', 't/14downstreamswitch.t .. ok', 't/15parentswitch.t ...... ok', 't/16gethandlers.t ....... ok', 't/release-pod-syntax.t .. skipped: these tests are for release candidate testing', 'All tests successful.', 'Files=19, Tests=137, 1 wallclock secs ( 0.07 usr 0.01 sys + 0.45 cusr 0.05 csys = 0.58 CPU)', 'Result: PASS', ' GRANTM/XML-SAX-Base-1.08.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 3 pod documents', 'Installing /usr/local/share/perl/5.22.1/XML/SAX/Base.pm', 'Installing /usr/local/share/perl/5.22.1/XML/SAX/BuildSAXBase.pl', 'Installing /usr/local/share/perl/5.22.1/XML/SAX/Exception.pm', 'Installing /usr/local/man/man3/XML::SAX::Exception.3pm', 'Installing /usr/local/man/man3/XML::SAX::BuildSAXBase.3pm', 'Installing /usr/local/man/man3/XML::SAX::Base.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' GRANTM/XML-SAX-Base-1.08.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'XML::NamespaceSupport'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716'. 
Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716'. Giving up on it.", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716'. Giving up on it.", 'No external ftp command available', '', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6716'. Giving up on it.", 'No external ftp command available', '', 'Please check, if the URLs I found in your configuration file', '(http://apt-mirror.sepia.ceph.com/CPAN/) are valid. The urllist can be', "edited. E.g. with 'o conf urllist push ftp://myurl/'", '', 'Could not fetch authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz'], 'failed': True}}

fail 2054939 2018-01-10 14:31:01 2018-01-11 02:27:34 2018-01-11 02:57:33 0:29:59 0:10:58 0:19:01 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh019 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054941 2018-01-10 14:31:01 2018-01-11 02:27:34 2018-01-11 03:05:34 0:38:00 0:11:04 0:26:56 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rgw_snaps.yaml} 2
Failure Reason:

Command failed on ovh002 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054943 2018-01-10 14:31:02 2018-01-11 02:27:33 2018-01-11 02:49:33 0:22:00 0:11:46 0:10:14 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh090 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054944 2018-01-10 14:31:05 2018-01-11 02:27:34 2018-01-11 02:53:33 0:25:59 0:10:36 0:15:23 ovh master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml tasks/dashboard.yaml} 2
Failure Reason:

Command failed on ovh038 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054946 2018-01-10 14:31:05 2018-01-11 02:27:34 2018-01-11 02:43:33 0:15:59 0:09:48 0:06:11 ovh master rados/objectstore/alloc-hint.yaml 1
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054948 2018-01-10 14:31:06 2018-01-11 02:29:27 2018-01-11 02:49:27 0:20:00 0:10:20 0:09:40 ovh master rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

Command failed on ovh028 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054950 2018-01-10 14:31:07 2018-01-11 02:29:27 2018-01-11 02:47:27 0:18:00 0:10:43 0:07:17 ovh master rados/rest/mgr-restful.yaml 1
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054952 2018-01-10 14:31:07 2018-01-11 02:29:28 2018-01-11 02:49:27 0:19:59 0:11:11 0:08:48 ovh master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054954 2018-01-10 14:31:08 2018-01-11 02:29:28 2018-01-11 02:49:28 0:20:00 0:11:24 0:08:36 ovh master rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054956 2018-01-10 14:31:09 2018-01-11 02:29:28 2018-01-11 02:51:28 0:22:00 0:10:38 0:11:22 ovh master rados/standalone/crush.yaml 1
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054958 2018-01-10 14:31:09 2018-01-11 02:29:29 2018-01-11 05:05:32 2:36:03 0:10:24 2:25:39 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Ansible failed on ovh085.front.sepia.ceph.com while installing krb5-user: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install krb5-user' exited with rc 100. apt fetched libgssrpc4, libkadm5clnt-mit9, libkdb5-8, libkadm5srv-mit9 and krb5-user from http://nova.clouds.archive.ubuntu.com/ubuntu, but krb5-config_2.3_all.deb could not be downloaded ("Temporary failure resolving 'nova.clouds.archive.ubuntu.com'"), ending with "E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?"

fail 2054960 2018-01-10 14:31:10 2018-01-11 02:31:37 2018-01-11 03:11:37 0:40:00 0:14:44 0:25:16 ovh master rados/upgrade/luminous-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-mimic.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} 3
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054962 2018-01-10 14:31:10 2018-01-11 02:33:31 2018-01-11 03:01:30 0:27:59 0:09:47 0:18:12 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh009 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054964 2018-01-10 14:31:11 2018-01-11 02:35:42 2018-01-11 02:51:41 0:15:59 0:10:47 0:05:12 ovh master rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054966 2018-01-10 14:31:12 2018-01-11 02:35:42 2018-01-11 03:03:41 0:27:59 0:10:13 0:17:46 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh081 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054968 2018-01-10 14:31:12 2018-01-11 02:35:42 2018-01-11 03:03:41 0:27:59 0:10:26 0:17:33 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054970 2018-01-10 14:31:13 2018-01-11 02:35:42 2018-01-11 03:01:41 0:25:59 0:10:08 0:15:51 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054972 2018-01-10 14:31:14 2018-01-11 02:37:30 2018-01-11 02:59:30 0:22:00 0:10:02 0:11:58 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054974 2018-01-10 14:31:14 2018-01-11 02:37:39 2018-01-11 02:55:38 0:17:59 0:10:39 0:07:20 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh060 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054976 2018-01-10 14:31:15 2018-01-11 02:39:30 2018-01-11 02:59:30 0:20:00 0:11:00 0:09:00 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054979 2018-01-10 14:31:16 2018-01-11 02:39:30 2018-01-11 02:53:30 0:14:00 0:10:09 0:03:51 ovh master rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054980 2018-01-10 14:31:16 2018-01-11 02:39:31 2018-01-11 03:01:31 0:22:00 0:11:04 0:10:56 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh052 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054982 2018-01-10 14:31:17 2018-01-11 02:39:31 2018-01-11 02:59:31 0:20:00 0:09:40 0:10:20 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh087 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054984 2018-01-10 14:31:17 2018-01-11 02:39:36 2018-01-11 02:57:36 0:18:00 0:10:16 0:07:44 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh059 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054986 2018-01-10 14:31:18 2018-01-11 02:41:24 2018-01-11 02:57:23 0:15:59 0:11:01 0:04:58 ovh master rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml workloads/fio_4K_rand_rw.yaml} 1
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054988 2018-01-10 14:31:19 2018-01-11 02:43:53 2018-01-11 03:19:50 0:35:57 0:11:43 0:24:14 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054990 2018-01-10 14:31:19 2018-01-11 02:43:53 2018-01-11 03:15:51 0:31:58 0:10:08 0:21:50 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/scrub_test.yaml} 2
Failure Reason:

Command failed on ovh075 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054992 2018-01-10 14:31:20 2018-01-11 02:43:53 2018-01-11 02:59:50 0:15:57 0:09:48 0:06:09 ovh master rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054994 2018-01-10 14:31:21 2018-01-11 02:43:53 2018-01-11 03:21:51 0:37:58 0:10:52 0:27:06 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh018 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054996 2018-01-10 14:31:22 2018-01-11 02:43:53 2018-01-11 03:01:50 0:17:57 0:10:02 0:07:55 ovh master rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh012 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2054998 2018-01-10 14:31:22 2018-01-11 02:43:52 2018-01-11 03:01:50 0:17:58 0:10:28 0:07:30 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055000 2018-01-10 14:31:23 2018-01-11 02:43:53 2018-01-11 02:59:50 0:15:57 0:10:21 0:05:36 ovh master rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh023 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055003 2018-01-10 14:31:24 2018-01-11 02:45:28 2018-01-11 03:07:28 0:22:00 0:11:01 0:10:59 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
Failure Reason:

Command failed on ovh098 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

pass 2055005 2018-01-10 14:31:24 2018-01-11 02:45:31 2018-01-11 03:37:31 0:52:00 0:44:28 0:07:32 ovh master centos 7.4 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 2055007 2018-01-10 14:31:25 2018-01-11 02:45:42 2018-01-11 03:05:42 0:20:00 0:10:57 0:09:03 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh018 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055009 2018-01-10 14:31:26 2018-01-11 02:47:31 2018-01-11 03:09:31 0:22:00 0:10:50 0:11:10 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh076 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055011 2018-01-10 14:31:26 2018-01-11 02:47:32 2018-01-11 03:19:31 0:31:59 0:10:59 0:21:00 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh048 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055013 2018-01-10 14:31:27 2018-01-11 02:47:31 2018-01-11 03:03:31 0:16:00 ovh master rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 2
Failure Reason:

Could not reconnect to ubuntu@ovh006.front.sepia.ceph.com
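This job died before package install because the framework could not get back onto the node. A minimal sketch of a manual reachability check from the scheduling host, assuming network access to the lab and using the hostname from the message above:

  # Is the node answering on the network at all?
  ping -c 3 ovh006.front.sepia.ceph.com
  # Can an SSH session be opened as the test user without hanging?
  ssh -o ConnectTimeout=10 ubuntu@ovh006.front.sepia.ceph.com uptime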

fail 2055015 2018-01-10 14:31:27 2018-01-11 02:47:32 2018-01-11 03:19:31 0:31:59 0:11:09 0:20:50 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh063 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055017 2018-01-10 14:31:28 2018-01-11 02:47:31 2018-01-11 03:07:31 0:20:00 0:11:13 0:08:47 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055019 2018-01-10 14:31:29 2018-01-11 02:47:31 2018-01-11 03:03:31 0:16:00 0:09:42 0:06:18 ovh master rados/objectstore/ceph_objectstore_tool.yaml 1
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055021 2018-01-10 14:31:30 2018-01-11 02:47:32 2018-01-11 03:17:31 0:29:59 0:10:59 0:19:00 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055023 2018-01-10 14:31:31 2018-01-11 02:47:39 2018-01-11 03:19:39 0:32:00 0:10:53 0:21:07 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055025 2018-01-10 14:31:31 2018-01-11 02:47:40 2018-01-11 03:03:39 0:15:59 0:10:45 0:05:14 ovh master rados/perf/{ceph.yaml objectstore/bluestore.yaml openstack.yaml settings/optimized.yaml workloads/fio_4M_rand_read.yaml} 1
Failure Reason:

Command failed on ovh028 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055027 2018-01-10 14:31:32 2018-01-11 02:49:29 2018-01-11 03:15:28 0:25:59 0:11:18 0:14:41 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh094 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055029 2018-01-10 14:31:33 2018-01-11 02:49:29 2018-01-11 03:05:28 0:15:59 0:10:43 0:05:16 ovh master rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh083 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055031 2018-01-10 14:31:33 2018-01-11 02:49:29 2018-01-11 03:19:29 0:30:00 0:11:37 0:18:23 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055033 2018-01-10 14:31:34 2018-01-11 02:49:34 2018-01-11 03:13:34 0:24:00 0:10:42 0:13:18 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh038 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055035 2018-01-10 14:31:35 2018-01-11 02:51:39 2018-01-11 03:19:38 0:27:59 0:12:10 0:15:49 ovh master rados/multimon/{clusters/21.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_clock_no_skews.yaml} 3
Failure Reason:

Command failed on ovh060 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055037 2018-01-10 14:31:35 2018-01-11 02:51:42 2018-01-11 03:37:42 0:46:00 0:10:54 0:35:06 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055039 2018-01-10 14:31:36 2018-01-11 02:53:31 2018-01-11 03:29:31 0:36:00 0:10:25 0:25:35 ovh master rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055041 2018-01-10 14:31:36 2018-01-11 02:53:35 2018-01-11 03:15:34 0:21:59 0:11:17 0:10:42 ovh master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed on ovh019 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055043 2018-01-10 14:31:37 2018-01-11 02:55:40 2018-01-11 03:25:40 0:30:00 0:11:29 0:18:31 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh058 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055045 2018-01-10 14:31:38 2018-01-11 02:57:25 2018-01-11 03:15:24 0:17:59 0:09:57 0:08:02 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh087 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055047 2018-01-10 14:31:38 2018-01-11 02:57:33 2018-01-11 03:13:32 0:15:59 0:10:13 0:05:46 ovh master rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055049 2018-01-10 14:31:39 2018-01-11 02:57:34 2018-01-11 03:21:34 0:24:00 0:10:42 0:13:18 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh010 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055051 2018-01-10 14:31:39 2018-01-11 02:57:37 2018-01-11 03:13:36 0:15:59 0:09:57 0:06:02 ovh master rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055053 2018-01-10 14:31:40 2018-01-11 02:59:31 2018-01-11 03:27:31 0:28:00 0:12:22 0:15:38 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh070 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055055 2018-01-10 14:31:41 2018-01-11 02:59:31 2018-01-11 03:19:31 0:20:00 0:10:04 0:09:56 ovh master rados/standalone/erasure-code.yaml 1
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055057 2018-01-10 14:31:41 2018-01-11 02:59:32 2018-01-11 03:43:32 0:44:00 0:11:06 0:32:54 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on ovh052 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055059 2018-01-10 14:31:42 2018-01-11 02:59:51 2018-01-11 03:19:51 0:20:00 0:11:08 0:08:52 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055061 2018-01-10 14:31:43 2018-01-11 02:59:52 2018-01-11 03:35:51 0:35:59 0:10:40 0:25:19 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055063 2018-01-10 14:31:43 2018-01-11 03:01:32 2018-01-11 03:23:31 0:21:59 0:10:46 0:11:13 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 2
Failure Reason:

Command failed on ovh064 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055065 2018-01-10 14:31:44 2018-01-11 03:01:32 2018-01-11 03:33:32 0:32:00 0:10:35 0:21:25 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh019 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055067 2018-01-10 14:31:45 2018-01-11 03:01:42 2018-01-11 03:23:42 0:22:00 0:10:25 0:11:35 ovh master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/failover.yaml} 2
Failure Reason:

Command failed on ovh003 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055069 2018-01-10 14:31:45 2018-01-11 03:01:51 2018-01-11 03:15:51 0:14:00 0:09:34 0:04:26 ovh master rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml workloads/fio_4M_rand_rw.yaml} 1
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055071 2018-01-10 14:31:46 2018-01-11 03:01:51 2018-01-11 03:21:51 0:20:00 0:12:00 0:08:00 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh031 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055073 2018-01-10 14:31:46 2018-01-11 03:03:43 2018-01-11 03:17:42 0:13:59 0:10:21 0:03:38 ovh master rados/singleton/{all/reg11184.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh081 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055075 2018-01-10 14:31:47 2018-01-11 03:03:43 2018-01-11 03:33:42 0:29:59 0:11:17 0:18:42 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh023 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055077 2018-01-10 14:31:48 2018-01-11 03:03:43 2018-01-11 03:39:43 0:36:00 0:11:19 0:24:41 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055079 2018-01-10 14:31:48 2018-01-11 03:03:43 2018-01-11 03:51:43 0:48:00 0:10:48 0:37:12 ovh master rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_workunits.yaml} 2
Failure Reason:

Command failed on ovh061 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055081 2018-01-10 14:31:49 2018-01-11 03:03:43 2018-01-11 03:31:42 0:27:59 0:10:09 0:17:50 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055083 2018-01-10 14:31:50 2018-01-11 03:05:31 2018-01-11 03:29:30 0:23:59 0:12:24 0:11:35 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml} 2
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055085 2018-01-10 14:31:50 2018-01-11 03:05:35 2018-01-11 03:39:35 0:34:00 0:10:28 0:23:32 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh008 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055087 2018-01-10 14:31:51 2018-01-11 03:05:43 2018-01-11 03:21:42 0:15:59 0:10:59 0:05:00 ovh master rados/objectstore/filejournal.yaml 1
Failure Reason:

Command failed on ovh074 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055089 2018-01-10 14:31:52 2018-01-11 03:07:30 2018-01-11 03:37:29 0:29:59 0:11:43 0:18:16 ovh master rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055091 2018-01-10 14:31:53 2018-01-11 03:07:32 2018-01-11 03:29:31 0:21:59 0:11:29 0:10:30 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh015 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055093 2018-01-10 14:31:53 2018-01-11 03:07:49 2018-01-11 03:29:47 0:21:58 0:11:59 0:09:59 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055095 2018-01-10 14:31:54 2018-01-11 03:09:34 2018-01-11 03:25:32 0:15:58 0:11:31 0:04:27 ovh master rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055097 2018-01-10 14:31:54 2018-01-11 03:11:39 2018-01-11 03:29:38 0:17:59 0:10:03 0:07:56 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055099 2018-01-10 14:31:55 2018-01-11 03:11:40 2018-01-11 05:01:42 1:50:02 0:11:06 1:38:56 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on ovh009 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055101 2018-01-10 14:31:56 2018-01-11 03:13:34 2018-01-11 03:31:33 0:17:59 0:10:48 0:07:11 ovh master ubuntu 16.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055103 2018-01-10 14:31:56 2018-01-11 03:13:35 2018-01-11 03:35:35 0:22:00 0:10:37 0:11:23 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh069 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055105 2018-01-10 14:31:57 2018-01-11 03:13:37 2018-01-11 03:35:37 0:22:00 0:10:11 0:11:49 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh060 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055107 2018-01-10 14:31:58 2018-01-11 03:15:26 2018-01-11 03:33:25 0:17:59 0:10:05 0:07:54 ovh master rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml workloads/fio_4M_rand_write.yaml} 1
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055109 2018-01-10 14:31:58 2018-01-11 03:15:29 2018-01-11 03:29:29 0:14:00 0:10:07 0:03:53 ovh master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055111 2018-01-10 14:31:59 2018-01-11 03:15:35 2018-01-11 03:31:35 0:16:00 0:10:15 0:05:45 ovh master rados/singleton/{all/rest-api.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh081 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055113 2018-01-10 14:31:59 2018-01-11 03:15:52 2018-01-11 03:31:51 0:15:59 0:09:43 0:06:16 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055115 2018-01-10 14:32:00 2018-01-11 03:15:52 2018-01-11 03:37:52 0:22:00 0:10:47 0:11:13 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055117 2018-01-10 14:32:01 2018-01-11 03:17:50 2018-01-11 03:45:48 0:27:58 0:10:47 0:17:11 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055119 2018-01-10 14:32:01 2018-01-11 03:17:50 2018-01-11 03:53:49 0:35:59 0:10:48 0:25:11 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055121 2018-01-10 14:32:02 2018-01-11 03:19:30 2018-01-11 03:45:30 0:26:00 0:10:21 0:15:39 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh098 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055123 2018-01-10 14:32:03 2018-01-11 03:19:32 2018-01-11 04:07:32 0:48:00 0:10:50 0:37:10 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_python.yaml} 2
Failure Reason:

Command failed on ovh081 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055125 2018-01-10 14:32:03 2018-01-11 03:19:32 2018-01-11 03:33:32 0:14:00 0:09:58 0:04:02 ovh master rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055127 2018-01-10 14:32:04 2018-01-11 03:19:32 2018-01-11 03:39:32 0:20:00 0:10:32 0:09:28 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh055 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055129 2018-01-10 14:32:04 2018-01-11 03:19:40 2018-01-11 03:41:39 0:21:59 0:10:40 0:11:19 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh085 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055131 2018-01-10 14:32:05 2018-01-11 03:19:40 2018-01-11 03:53:40 0:34:00 0:10:42 0:23:18 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh069 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055133 2018-01-10 14:32:06 2018-01-11 03:19:51 2018-01-11 03:39:51 0:20:00 0:10:45 0:09:15 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh010 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055135 2018-01-10 14:32:06 2018-01-11 03:19:52 2018-01-11 03:35:52 0:16:00 0:10:14 0:05:46 ovh master rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh061 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055137 2018-01-10 14:32:07 2018-01-11 03:21:36 2018-01-11 03:35:35 0:13:59 0:10:38 0:03:21 ovh master rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4K_rand_read.yaml} 1
Failure Reason:

Command failed on ovh064 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055139 2018-01-10 14:32:08 2018-01-11 03:21:43 2018-01-11 03:43:43 0:22:00 0:10:40 0:11:20 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh003 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055141 2018-01-10 14:32:09 2018-01-11 03:21:52 2018-01-11 03:47:52 0:26:00 0:10:47 0:15:13 ovh master rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055143 2018-01-10 14:32:09 2018-01-11 03:21:53 2018-01-11 03:49:52 0:27:59 0:11:08 0:16:51 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055145 2018-01-10 14:32:10 2018-01-11 03:23:33 2018-01-11 04:25:34 1:02:01 0:10:47 0:51:14 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh051 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055147 2018-01-10 14:32:11 2018-01-11 03:23:43 2018-01-11 03:53:42 0:29:59 0:11:18 0:18:41 ovh master rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_clock_with_skews.yaml} 3
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055149 2018-01-10 14:32:11 2018-01-11 03:25:34 2018-01-11 03:41:33 0:15:59 0:10:07 0:05:52 ovh master rados/objectstore/filestore-idempotent-aio-journal.yaml 1
Failure Reason:

Command failed on ovh090 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055151 2018-01-10 14:32:12 2018-01-11 03:25:41 2018-01-11 03:41:40 0:15:59 0:09:30 0:06:29 ovh master rados/standalone/misc.yaml 1
Failure Reason:

Command failed on ovh065 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055153 2018-01-10 14:32:12 2018-01-11 03:27:33 2018-01-11 03:45:32 0:17:59 0:10:38 0:07:21 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh074 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055155 2018-01-10 14:32:13 2018-01-11 03:29:30 2018-01-11 03:47:30 0:18:00 0:10:39 0:07:21 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055157 2018-01-10 14:32:14 2018-01-11 03:29:32 2018-01-11 03:49:31 0:19:59 0:10:42 0:09:17 ovh master rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055159 2018-01-10 14:32:14 2018-01-11 03:29:33 2018-01-11 04:09:33 0:40:00 0:11:56 0:28:04 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055161 2018-01-10 14:32:15 2018-01-11 03:29:33 2018-01-11 05:17:34 1:48:01 0:48:22 0:59:39 ovh master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

saw valgrind issues

fail 2055163 2018-01-10 14:32:16 2018-01-11 03:29:39 2018-01-11 03:51:39 0:22:00 0:11:30 0:10:30 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh094 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055165 2018-01-10 14:32:16 2018-01-11 03:29:48 2018-01-11 03:51:48 0:22:00 0:11:22 0:10:38 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh059 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055167 2018-01-10 14:32:17 2018-01-11 03:31:35 2018-01-11 03:59:34 0:27:59 0:10:51 0:17:08 ovh master rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 2
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055169 2018-01-10 14:32:17 2018-01-11 03:31:36 2018-01-11 03:53:35 0:21:59 0:10:47 0:11:12 ovh master rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh032 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055171 2018-01-10 14:32:18 2018-01-11 03:31:44 2018-01-11 03:49:43 0:17:59 0:10:27 0:07:32 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055173 2018-01-10 14:32:19 2018-01-11 03:31:52 2018-01-11 03:51:52 0:20:00 0:10:36 0:09:24 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_stress_watch.yaml} 2
Failure Reason:

Command failed on ovh087 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055175 2018-01-10 14:32:19 2018-01-11 03:33:27 2018-01-11 03:51:26 0:17:59 0:09:47 0:08:12 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055177 2018-01-10 14:32:20 2018-01-11 03:33:33 2018-01-11 03:57:32 0:23:59 0:10:42 0:13:17 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh008 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055179 2018-01-10 14:32:21 2018-01-11 03:33:33 2018-01-11 04:09:33 0:36:00 0:10:38 0:25:22 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055181 2018-01-10 14:32:21 2018-01-11 03:33:44 2018-01-11 03:53:43 0:19:59 0:10:29 0:09:30 ovh master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore.yaml tasks/module_selftest.yaml} 2
Failure Reason:

Command failed on ovh060 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055183 2018-01-10 14:32:22 2018-01-11 03:35:36 2018-01-11 03:51:36 0:16:00 0:10:00 0:06:00 ovh master rados/perf/{ceph.yaml objectstore/bluestore.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4K_seq_read.yaml} 1
Failure Reason:

Command failed on ovh051 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055185 2018-01-10 14:32:23 2018-01-11 03:35:36 2018-01-11 03:55:36 0:20:00 0:10:37 0:09:23 ovh master rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} 2
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055186 2018-01-10 14:32:23 2018-01-11 03:35:38 2018-01-11 03:55:38 0:20:00 0:11:08 0:08:52 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh055 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055188 2018-01-10 14:32:24 2018-01-11 03:35:53 2018-01-11 03:51:52 0:15:59 0:10:34 0:05:25 ovh master rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055190 2018-01-10 14:32:25 2018-01-11 03:35:53 2018-01-11 03:57:52 0:21:59 0:10:58 0:11:01 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh083 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055192 2018-01-10 14:32:25 2018-01-11 03:37:44 2018-01-11 04:15:43 0:37:59 0:11:25 0:26:34 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh055 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055194 2018-01-10 14:32:26 2018-01-11 03:37:44 2018-01-11 04:09:43 0:31:59 0:10:48 0:21:11 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh061 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055196 2018-01-10 14:32:27 2018-01-11 03:37:44 2018-01-11 04:25:44 0:48:00 0:10:38 0:37:22 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055198 2018-01-10 14:32:27 2018-01-11 03:37:53 2018-01-11 05:19:54 1:42:01 0:11:51 1:30:10 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
Failure Reason:

Command failed on ovh094 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

pass 2055200 2018-01-10 14:32:28 2018-01-11 03:39:33 2018-01-11 04:59:34 1:20:01 0:41:11 0:38:50 ovh master centos 7.4 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported/centos_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 2055202 2018-01-10 14:32:29 2018-01-11 03:39:36 2018-01-11 03:55:35 0:15:59 0:11:05 0:04:54 ovh master rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh031 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055204 2018-01-10 14:32:29 2018-01-11 03:39:44 2018-01-11 04:07:44 0:28:00 0:10:50 0:17:10 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055206 2018-01-10 14:32:30 2018-01-11 03:39:52 2018-01-11 03:59:52 0:20:00 0:11:20 0:08:40 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh085 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055208 2018-01-10 14:32:30 2018-01-11 03:41:35 2018-01-11 04:17:35 0:36:00 0:12:30 0:23:30 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh058 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055210 2018-01-10 14:32:31 2018-01-11 03:41:41 2018-01-11 03:59:40 0:17:59 0:10:48 0:07:11 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_striper.yaml} 2
Failure Reason:

Command failed on ovh089 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055212 2018-01-10 14:32:32 2018-01-11 03:41:42 2018-01-11 03:59:41 0:17:59 0:11:14 0:06:45 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh012 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055214 2018-01-10 14:32:32 2018-01-11 03:43:34 2018-01-11 03:59:33 0:15:59 0:10:07 0:05:52 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh098 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055216 2018-01-10 14:32:33 2018-01-11 03:43:46 2018-01-11 04:01:44 0:17:58 0:10:46 0:07:12 ovh master rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_rand_read.yaml} 1
Failure Reason:

Command failed on ovh044 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055218 2018-01-10 14:32:34 2018-01-11 03:45:32 2018-01-11 04:01:31 0:15:59 0:10:50 0:05:09 ovh master rados/singleton/{all/admin-socket.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh038 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055220 2018-01-10 14:32:34 2018-01-11 03:45:34 2018-01-11 04:05:33 0:19:59 0:10:01 0:09:58 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh087 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055222 2018-01-10 14:32:35 2018-01-11 03:45:49 2018-01-11 04:01:49 0:16:00 0:11:18 0:04:42 ovh master rados/objectstore/filestore-idempotent.yaml 1
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055224 2018-01-10 14:32:36 2018-01-11 03:47:39 2018-01-11 04:05:39 0:18:00 0:11:04 0:06:56 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055226 2018-01-10 14:32:36 2018-01-11 03:47:53 2018-01-11 04:09:52 0:21:59 0:10:39 0:11:20 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055228 2018-01-10 14:32:37 2018-01-11 03:49:33 2018-01-11 04:03:32 0:13:59 0:10:10 0:03:49 ovh master rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055230 2018-01-10 14:32:38 2018-01-11 03:49:44 2018-01-11 04:07:44 0:18:00 0:09:36 0:08:24 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh023 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055232 2018-01-10 14:32:38 2018-01-11 03:49:54 2018-01-11 04:07:53 0:17:59 0:11:35 0:06:24 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on ovh016 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055234 2018-01-10 14:32:39 2018-01-11 03:51:28 2018-01-11 04:05:27 0:13:59 0:09:32 0:04:27 ovh master rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055236 2018-01-10 14:32:40 2018-01-11 03:51:37 2018-01-11 04:19:37 0:28:00 0:11:01 0:16:59 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055238 2018-01-10 14:32:40 2018-01-11 03:51:40 2018-01-11 04:09:39 0:17:59 0:10:57 0:07:02 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh075 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055240 2018-01-10 14:32:41 2018-01-11 03:51:44 2018-01-11 04:09:44 0:18:00 0:11:18 0:06:42 ovh master rados/monthrash/{ceph.yaml clusters/3-mons.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055242 2018-01-10 14:32:42 2018-01-11 03:52:00 2018-01-11 04:18:00 0:26:00 0:12:21 0:13:39 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh010 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055244 2018-01-10 14:32:43 2018-01-11 03:52:00 2018-01-11 04:11:59 0:19:59 0:11:11 0:08:48 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
Failure Reason:

Command failed on ovh063 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055246 2018-01-10 14:32:43 2018-01-11 03:52:00 2018-01-11 04:18:00 0:26:00 0:10:02 0:15:58 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

Command failed on ovh002 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055248 2018-01-10 14:32:44 2018-01-11 03:53:37 2018-01-11 04:09:36 0:15:59 0:10:45 0:05:14 ovh master rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_seq_read.yaml} 1
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055250 2018-01-10 14:32:45 2018-01-11 03:53:42 2018-01-11 04:15:41 0:21:59 0:12:01 0:09:58 ovh master rados/rest/rest_test.yaml 2
Failure Reason:

Command failed on ovh028 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055252 2018-01-10 14:32:45 2018-01-11 03:53:44 2018-01-11 04:09:44 0:16:00 0:11:30 0:04:30 ovh master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh031 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055254 2018-01-10 14:32:46 2018-01-11 03:53:45 2018-01-11 04:09:44 0:15:59 0:10:13 0:05:46 ovh master rados/standalone/mon.yaml 1
Failure Reason:

Command failed on ovh026 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055256 2018-01-10 14:32:46 2018-01-11 03:53:50 2018-01-11 05:15:51 1:22:01 0:17:15 1:04:46 ovh master rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

ovh070.front.sepia.ceph.com: the Ansible apt task failed (rc 100, failed: True, changed: False). Command: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install krb5-user' (module args: name=krb5-user, state=present, dpkg_options=force-confdef,force-confold; cache_update_time 1515645913).

stderr:
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

stdout:
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8
Suggested packages:
  krb5-doc
The following NEW packages will be installed:
  krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8
0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.
Need to get 302 kB of archives.
After this operation, 1296 kB of additional disk space will be used.
Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Err:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2
  Temporary failure resolving 'nova.clouds.archive.ubuntu.com'
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2 [36.7 kB]
Get:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2 [37.1 kB]
Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]
Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]
Fetched 224 kB in 28s (7856 B/s)
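Unlike the other jobs in this batch, this one failed during Ansible provisioning rather than at the ceph install step: the node could not resolve nova.clouds.archive.ubuntu.com while fetching krb5-user. A short diagnostic sketch (an assumption-laden illustration, not part of the run), assuming shell access to ovh070, would check name resolution before suspecting the mirror:

    # Hedged sketch -- "Temporary failure resolving ..." is a DNS problem, not a missing
    # package, so confirm resolution first.
    getent hosts nova.clouds.archive.ubuntu.com    # empty output means the name does not resolve
    cat /etc/resolv.conf                           # resolvers the node is actually using
    # once resolution works, re-running the failed step is expected to succeed
    sudo apt-get update && sudo apt-get -y install krb5-user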

fail 2055258 2018-01-10 14:32:47 2018-01-11 03:55:37 2018-01-11 04:17:36 0:21:59 0:10:18 0:11:41 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on ovh098 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055260 2018-01-10 14:32:48 2018-01-11 03:55:38 2018-01-11 04:15:37 0:19:59 0:12:17 0:07:42 ovh master rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh003 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055262 2018-01-10 14:32:48 2018-01-11 03:55:39 2018-01-11 04:17:38 0:21:59 0:12:09 0:09:50 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_workunit_loadgen_big.yaml} 2
Failure Reason:

Command failed on ovh089 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055264 2018-01-10 14:32:49 2018-01-11 03:57:35 2018-01-11 04:25:34 0:27:59 0:10:54 0:17:05 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on ovh023 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055266 2018-01-10 14:32:50 2018-01-11 03:57:55 2018-01-11 04:17:53 0:19:58 0:11:26 0:08:32 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh048 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055268 2018-01-10 14:32:50 2018-01-11 03:59:35 2018-01-11 04:27:35 0:28:00 0:10:51 0:17:09 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh036 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055270 2018-01-10 14:32:51 2018-01-11 03:59:36 2018-01-11 04:27:35 0:27:59 0:11:19 0:16:40 ovh master rados/multimon/{clusters/3.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml} 2
Failure Reason:

Command failed on ovh020 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055272 2018-01-10 14:32:52 2018-01-11 03:59:41 2018-01-11 04:19:41 0:20:00 0:10:32 0:09:28 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh044 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055274 2018-01-10 14:32:52 2018-01-11 03:59:43 2018-01-11 04:23:42 0:23:59 0:11:03 0:12:56 ovh master rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
Failure Reason:

Command failed on ovh087 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055276 2018-01-10 14:32:53 2018-01-11 03:59:53 2018-01-11 04:25:52 0:25:59 0:10:34 0:15:25 ovh master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed on ovh001 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055278 2018-01-10 14:32:54 2018-01-11 04:01:33 2018-01-11 04:17:32 0:15:59 0:10:51 0:05:08 ovh master rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh086 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055280 2018-01-10 14:32:54 2018-01-11 04:01:45 2018-01-11 04:25:44 0:23:59 0:10:28 0:13:31 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh046 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055282 2018-01-10 14:32:55 2018-01-11 04:01:50 2018-01-11 04:19:50 0:18:00 0:10:11 0:07:49 ovh master rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055284 2018-01-10 14:32:55 2018-01-11 04:03:34 2018-01-11 04:21:33 0:17:59 0:10:22 0:07:37 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh014 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055286 2018-01-10 14:32:56 2018-01-11 04:05:29 2018-01-11 04:59:29 0:54:00 0:11:43 0:42:17 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh075 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055288 2018-01-10 14:32:57 2018-01-11 04:05:34 2018-01-11 04:27:34 0:22:00 0:10:48 0:11:12 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh031 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055290 2018-01-10 14:32:57 2018-01-11 04:05:40 2018-01-11 04:21:40 0:16:00 0:09:35 0:06:25 ovh master rados/objectstore/fusestore.yaml 1
Failure Reason:

Command failed on ovh057 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055292 2018-01-10 14:32:58 2018-01-11 04:07:34 2018-01-11 04:27:34 0:20:00 0:10:53 0:09:07 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh096 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055294 2018-01-10 14:32:59 2018-01-11 04:07:53 2018-01-11 04:27:52 0:19:59 0:11:12 0:08:47 ovh master rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/workunits.yaml} 2
Failure Reason:

Command failed on ovh032 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055296 2018-01-10 14:32:59 2018-01-11 04:07:53 2018-01-11 04:25:52 0:17:59 0:11:02 0:06:57 ovh master rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml workloads/radosbench_4M_write.yaml} 1
Failure Reason:

Command failed on ovh026 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055298 2018-01-10 14:33:00 2018-01-11 04:07:54 2018-01-11 04:21:53 0:13:59 0:10:11 0:03:48 ovh master rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh059 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055300 2018-01-10 14:33:01 2018-01-11 04:09:34 2018-01-11 04:41:34 0:32:00 0:09:49 0:22:11 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on ovh076 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055302 2018-01-10 14:33:01 2018-01-11 04:09:34 2018-01-11 04:51:34 0:42:00 0:11:25 0:30:35 ovh master rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

{'ovh031.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'archive.ubuntu.com'", '', "E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'archive.ubuntu.com'", '', "E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8\nSuggested packages:\n krb5-doc\nThe following NEW packages will be installed:\n krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9\n libkdb5-8\n0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.\nNeed to get 302 kB of archives.\nAfter this operation, 1296 kB of additional disk space will be used.\nErr:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3\n Temporary failure resolving 'archive.ubuntu.com'\nErr:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2\n Temporary failure resolving 'archive.ubuntu.com'\nErr:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2\n Temporary failure resolving 'archive.ubuntu.com'\nGet:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2 [37.1 kB]\nGet:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]\nGet:6 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]\nFetched 187 kB in 56s (3283 B/s)\n", 'cache_updated': False, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'name': 'krb5-user', 'package': ['krb5-user'], 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'failed': True, 'stderr': "E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'archive.ubuntu.com'\n\nE: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'archive.ubuntu.com'\n\nE: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'rc': 100, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'krb5-user\'\' failed: E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving \'archive.ubuntu.com\'\n\nE: Failed to fetch 
http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving \'archive.ubuntu.com\'\n\nE: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving \'archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following additional packages will be installed:', ' krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8', 'Suggested packages:', ' krb5-doc', 'The following NEW packages will be installed:', ' krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9', ' libkdb5-8', '0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.', 'Need to get 302 kB of archives.', 'After this operation, 1296 kB of additional disk space will be used.', 'Err:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3', " Temporary failure resolving 'archive.ubuntu.com'", 'Err:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2', " Temporary failure resolving 'archive.ubuntu.com'", 'Err:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2', " Temporary failure resolving 'archive.ubuntu.com'", 'Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2 [37.1 kB]', 'Get:5 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]', 'Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]', 'Fetched 187 kB in 56s (3283 B/s)'], 'cache_update_time': 1515646068}, 'ovh036.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}}
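This failure happens in the Ansible setup phase rather than the ceph install itself: the node could not resolve archive.ubuntu.com while fetching the krb5 packages ("Temporary failure resolving ..."), so the job never reached its workload. A rough way to confirm it was a transient resolver problem rather than a repository problem, assuming shell access to the node (commands are a sketch, not taken from the log):

    # check name resolution and the resolver configuration on the failed node
    getent hosts archive.ubuntu.com
    cat /etc/resolv.conf
    # retry the fetch that failed; a transient DNS hiccup usually clears on re-run
    sudo apt-get update && sudo apt-get -y install krb5-user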

fail 2055304 2018-01-10 14:33:02 2018-01-11 04:09:37 2018-01-11 05:07:38 0:58:01 0:13:56 0:44:05 ovh master ubuntu 16.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} leveldb.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on ovh055 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055306 2018-01-10 14:33:02 2018-01-11 04:09:41 2018-01-11 05:07:41 0:58:00 0:14:41 0:43:19 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

{'ovh003.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/a/autogen/libopts25_5.18.7-3_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/n/ntp/ntp_4.2.8p4+dfsg-3ubuntu5.7_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n libopts25\nSuggested packages:\n ntp-doc\nThe following NEW packages will be installed:\n libopts25 ntp\n0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.\nNeed to get 576 kB of archives.\nAfter this operation, 1792 kB of additional disk space will be used.\nErr:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libopts25 amd64 1:5.18.7-3\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nErr:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ntp amd64 1:4.2.8p4+dfsg-3ubuntu5.7\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n", 'cache_updated': False, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'name': 'ntp', 'package': ['ntp'], 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'failed': True, 'stderr': "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/a/autogen/libopts25_5.18.7-3_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/n/ntp/ntp_4.2.8p4+dfsg-3ubuntu5.7_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'rc': 100, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'ntp\'\' failed: E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/a/autogen/libopts25_5.18.7-3_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/n/ntp/ntp_4.2.8p4+dfsg-3ubuntu5.7_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following additional packages will be installed:', ' libopts25', 'Suggested packages:', ' ntp-doc', 'The following NEW packages will be installed:', ' libopts25 ntp', '0 upgraded, 2 newly installed, 0 to remove and 4 not upgraded.', 'Need to get 576 kB of archives.', 'After this operation, 1792 kB of additional disk space will be used.', 'Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 libopts25 amd64 1:5.18.7-3', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Err:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 ntp amd64 
1:4.2.8p4+dfsg-3ubuntu5.7', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'"], 'cache_update_time': 1515644736}}

fail 2055308 2018-01-10 14:33:03 2018-01-11 04:09:44 2018-01-11 04:59:45 0:50:01 0:11:05 0:38:56 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_workunit_loadgen_mix.yaml} 2
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055310 2018-01-10 14:33:04 2018-01-11 04:09:45 2018-01-11 05:07:45 0:58:00 0:12:25 0:45:35 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on ovh006 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055312 2018-01-10 14:33:04 2018-01-11 04:09:45 2018-01-11 04:59:45 0:50:00 0:11:07 0:38:53 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on ovh058 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055314 2018-01-10 14:33:05 2018-01-11 04:09:45 2018-01-11 04:57:45 0:48:00 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on ovh059 with status 100: 'sudo apt-get -y install linux-image-generic'
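Job 2055314 fails one step earlier than the others, while installing the distribution kernel metapackage (linux-image-generic) rather than the ceph packages, but the exit status 100 points at the same apt/network environment problem on ovh059. A hedged check, assuming shell access to the node:

    # see whether the kernel metapackage is visible and what apt reports when installing it
    apt-cache policy linux-image-generic
    sudo apt-get -y install linux-image-generic 2>&1 | tail -n 20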

fail 2055316 2018-01-10 14:33:06 2018-01-11 04:09:53 2018-01-11 04:27:53 0:18:00 0:10:18 0:07:42 ovh master rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-bitmap.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh018 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055318 2018-01-10 14:33:06 2018-01-11 04:12:01 2018-01-11 05:08:02 0:56:01 0:10:33 0:45:28 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
Failure Reason:

{'ovh044.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': True}}

fail 2055320 2018-01-10 14:33:07 2018-01-11 04:15:39 2018-01-11 05:19:39 1:04:00 0:11:03 0:52:57 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

{'ovh020.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}}

fail 2055322 2018-01-10 14:33:08 2018-01-11 04:15:43 2018-01-11 05:05:42 0:49:59 0:09:58 0:40:01 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

{'ovh083.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/perl/perl-doc_5.22.1-9ubuntu0.2_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following NEW packages will be installed:\n perl-doc\n0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.\nNeed to get 7390 kB of archives.\nAfter this operation, 14.1 MB of additional disk space will be used.\nErr:1 http://security.ubuntu.com/ubuntu xenial-security/main amd64 perl-doc all 5.22.1-9ubuntu0.2\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nErr:1 http://security.ubuntu.com/ubuntu xenial-security/main amd64 perl-doc all 5.22.1-9ubuntu0.2\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n", 'cache_updated': False, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'name': 'perl-doc', 'package': ['perl-doc'], 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'failed': True, 'stderr': "E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/perl/perl-doc_5.22.1-9ubuntu0.2_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'rc': 100, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'perl-doc\'\' failed: E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/perl/perl-doc_5.22.1-9ubuntu0.2_all.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following NEW packages will be installed:', ' perl-doc', '0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.', 'Need to get 7390 kB of archives.', 'After this operation, 14.1 MB of additional disk space will be used.', 'Err:1 http://security.ubuntu.com/ubuntu xenial-security/main amd64 perl-doc all 5.22.1-9ubuntu0.2', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Err:1 http://security.ubuntu.com/ubuntu xenial-security/main amd64 perl-doc all 5.22.1-9ubuntu0.2', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'"], 'cache_update_time': 1515644800}}

fail 2055323 2018-01-10 14:33:08 2018-01-11 04:15:45 2018-01-11 04:29:44 0:13:59 0:08:33 0:05:26 ovh master rados/singleton-nomsgr/{all/msgr.yaml rados.yaml} 1
Failure Reason:

{'ovh086.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['--2018-01-11 04:27:23-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Name or service not known.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', '--2018-01-11 04:27:48-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... 8.43.84.130', 'Connecting to apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)|8.43.84.130|:80... connected.', 'HTTP request sent, awaiting response... 200 OK', 'Length: 70480 (69K)', u'Saving to: \u2018/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535\u2019', '', ' 0K .......... .......... .......... .......... .......... 72% 1.10M 0s', ' 50K .......... ........ 100% 862K=0.07s', '', u'2018-01-11 04:27:48 (1.01 MB/s) - \u2018/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535\u2019 saved [70480/70480]', '', 'Warning: prerequisite Class::Accessor::Fast 0 not found.', 'Warning: prerequisite Digest::HMAC_SHA1 0 not found.', 'Warning: prerequisite Digest::MD5::File 0 not found.', 'Warning: prerequisite LWP::UserAgent::Determined 0 not found.', 'Warning: prerequisite XML::Simple 1.08 not found.', '--2018-01-11 04:27:58-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', 'Warning: prerequisite XML::NamespaceSupport 1.04 not found.', 'Warning: prerequisite XML::SAX 0.15 not found.', 'Warning: prerequisite XML::SAX::Expat 0 not found.', '--2018-01-11 04:28:14-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', '--2018-01-11 04:28:25-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', '--2018-01-11 04:28:38-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', '--2018-01-11 04:28:38-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', 'Resolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.', u'wget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019'], 'cmd': ['cpan', 'Amazon::S3'], 'end': '2018-01-11 04:28:49.815652', '_ansible_no_log': False, 'stdout': 'Loading internal null logger. 
Install Log::Log4perl for logging messages\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/01mailrc.txt.gz\nReading \'/home/ubuntu/.cpan/sources/authors/01mailrc.txt.gz\'\n............................................................................DONE\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/modules/02packages.details.txt.gz\nReading \'/home/ubuntu/.cpan/sources/modules/02packages.details.txt.gz\'\n Database was generated on Fri, 12 Feb 2016 02:17:02 GMT\nWarning: This index file is 699 days old.\n Please check the host you chose as your CPAN mirror for staleness.\n I\'ll continue but problems seem likely to happen.\x07\n............................................................................DONE\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/modules/03modlist.data.gz\nReading \'/home/ubuntu/.cpan/sources/modules/03modlist.data.gz\'\nDONE\nWriting /home/ubuntu/.cpan/Metadata\nRunning install for module \'Amazon::S3\'\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz\nLWP failed with code[500] message[Can\'t connect to apt-mirror.sepia.ceph.com:80]\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535\'. 
Giving up on it.\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS.gz\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS\nChecksum for /home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz ok\n\'YAML\' not installed, will not store persistent state\nConfiguring T/TI/TIMA/Amazon-S3-0.45.tar.gz with Makefile.PL\nChecking if your kit is complete...\nLooks good\nGenerating a Unix-style Makefile\nWriting Makefile for Amazon::S3\nWriting MYMETA.yml and MYMETA.json\n TIMA/Amazon-S3-0.45.tar.gz\n /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK\nRunning make for T/TI/TIMA/Amazon-S3-0.45.tar.gz\n---- Unsatisfied dependencies detected during ----\n---- TIMA/Amazon-S3-0.45.tar.gz ----\n Digest::MD5::File [requires]\n Digest::HMAC_SHA1 [requires]\n XML::Simple [requires]\n Class::Accessor::Fast [requires]\n LWP::UserAgent::Determined [requires]\nRunning install for module \'Digest::MD5::File\'\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/CHECKSUMS\nChecksum for /home/ubuntu/.cpan/sources/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz ok\nConfiguring D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz with Makefile.PL\nChecking if your kit is complete...\nLooks good\nGenerating a Unix-style Makefile\nWriting Makefile for Digest::MD5::File\nWriting MYMETA.yml and MYMETA.json\n DMUEY/Digest-MD5-File-0.08.tar.gz\n /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK\nRunning make for D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz\ncp File.pm blib/lib/Digest/MD5/File.pm\nManifying 1 pod document\n DMUEY/Digest-MD5-File-0.08.tar.gz\n /usr/bin/make -- OK\nRunning make test\nPERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t\nt/1.t .. 
ok\nAll tests successful.\nFiles=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.08 cusr 0.00 csys = 0.10 CPU)\nResult: PASS\n DMUEY/Digest-MD5-File-0.08.tar.gz\n /usr/bin/make test -- OK\nRunning make install\nManifying 1 pod document\nInstalling /usr/local/share/perl/5.22.1/Digest/MD5/File.pm\nInstalling /usr/local/man/man3/Digest::MD5::File.3pm\nAppending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod\n DMUEY/Digest-MD5-File-0.08.tar.gz\n /usr/bin/make install -- OK\nRunning install for module \'Digest::HMAC_SHA1\'\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz\nLWP failed with code[500] message[Can\'t connect to apt-mirror.sepia.ceph.com:80]\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535\'. Giving up on it.\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/CHECKSUMS\nChecksum for /home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz ok\nConfiguring G/GA/GAAS/Digest-HMAC-1.03.tar.gz with Makefile.PL\nChecking if your kit is complete...\nLooks good\nGenerating a Unix-style Makefile\nWriting Makefile for Digest::HMAC\nWriting MYMETA.yml and MYMETA.json\n GAAS/Digest-HMAC-1.03.tar.gz\n /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK\nRunning make for G/GA/GAAS/Digest-HMAC-1.03.tar.gz\ncp lib/Digest/HMAC_SHA1.pm blib/lib/Digest/HMAC_SHA1.pm\ncp lib/Digest/HMAC.pm blib/lib/Digest/HMAC.pm\ncp lib/Digest/HMAC_MD5.pm blib/lib/Digest/HMAC_MD5.pm\nManifying 3 pod documents\n GAAS/Digest-HMAC-1.03.tar.gz\n /usr/bin/make -- OK\nRunning make test\nPERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t\nt/rfc2202.t .. 
ok\nAll tests successful.\nFiles=1, Tests=14, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.01 cusr 0.00 csys = 0.03 CPU)\nResult: PASS\n GAAS/Digest-HMAC-1.03.tar.gz\n /usr/bin/make test -- OK\nRunning make install\nManifying 3 pod documents\nInstalling /usr/local/share/perl/5.22.1/Digest/HMAC.pm\nInstalling /usr/local/share/perl/5.22.1/Digest/HMAC_SHA1.pm\nInstalling /usr/local/share/perl/5.22.1/Digest/HMAC_MD5.pm\nInstalling /usr/local/man/man3/Digest::HMAC.3pm\nInstalling /usr/local/man/man3/Digest::HMAC_MD5.3pm\nInstalling /usr/local/man/man3/Digest::HMAC_SHA1.3pm\nAppending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod\n GAAS/Digest-HMAC-1.03.tar.gz\n /usr/bin/make install -- OK\nRunning install for module \'XML::Simple\'\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/CHECKSUMS\nChecksum for /home/ubuntu/.cpan/sources/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz ok\nConfiguring G/GR/GRANTM/XML-Simple-2.22.tar.gz with Makefile.PL\nChecking if your kit is complete...\nLooks good\nGenerating a Unix-style Makefile\nWriting Makefile for XML::Simple\nWriting MYMETA.yml and MYMETA.json\n GRANTM/XML-Simple-2.22.tar.gz\n /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK\nRunning make for G/GR/GRANTM/XML-Simple-2.22.tar.gz\n---- Unsatisfied dependencies detected during ----\n---- GRANTM/XML-Simple-2.22.tar.gz ----\n XML::NamespaceSupport [requires]\n XML::SAX [requires]\n XML::SAX::Expat [requires]\nRunning install for module \'XML::NamespaceSupport\'\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nLWP failed with code[500] message[Can\'t connect to apt-mirror.sepia.ceph.com:80]\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535\'. Giving up on it.\nFetching with LWP:\nhttp://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nLWP failed with code[500] message[Can\'t connect to apt-mirror.sepia.ceph.com:80]\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535\'. 
Giving up on it.\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535\'. Giving up on it.\nNo external ftp command available\n\n\nTrying with\n /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"\nto get\n http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\n\n Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")\n returned status 4 (wstat 1024), left\n/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0\n Warning: no success downloading \'/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535\'. Giving up on it.\nNo external ftp command available\n\nPlease check, if the URLs I found in your configuration file\n(http://apt-mirror.sepia.ceph.com/CPAN/) are valid. The urllist can be\nedited. E.g. with \'o conf urllist push ftp://myurl/\'\n\nCould not fetch authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', 'changed': True, 'msg': 'non-zero return code', 'start': '2018-01-11 04:27:07.162952', 'delta': '0:01:42.652700', 'stderr': u'--2018-01-11 04:27:23-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Name or service not known.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019\n--2018-01-11 04:27:48-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... 8.43.84.130\nConnecting to apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)|8.43.84.130|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 70480 (69K)\nSaving to: \u2018/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535\u2019\n\n 0K .......... .......... .......... .......... .......... 72% 1.10M 0s\n 50K .......... ........ 100% 862K=0.07s\n\n2018-01-11 04:27:48 (1.01 MB/s) - \u2018/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535\u2019 saved [70480/70480]\n\nWarning: prerequisite Class::Accessor::Fast 0 not found.\nWarning: prerequisite Digest::HMAC_SHA1 0 not found.\nWarning: prerequisite Digest::MD5::File 0 not found.\nWarning: prerequisite LWP::UserAgent::Determined 0 not found.\nWarning: prerequisite XML::Simple 1.08 not found.\n--2018-01-11 04:27:58-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... 
failed: Temporary failure in name resolution.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019\nWarning: prerequisite XML::NamespaceSupport 1.04 not found.\nWarning: prerequisite XML::SAX 0.15 not found.\nWarning: prerequisite XML::SAX::Expat 0 not found.\n--2018-01-11 04:28:14-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019\n--2018-01-11 04:28:25-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019\n--2018-01-11 04:28:38-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019\n--2018-01-11 04:28:38-- http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz\nResolving apt-mirror.sepia.ceph.com (apt-mirror.sepia.ceph.com)... failed: Temporary failure in name resolution.\nwget: unable to resolve host address \u2018apt-mirror.sepia.ceph.com\u2019', 'rc': 2, 'invocation': {'module_args': {'creates': None, 'executable': None, 'chdir': None, '_raw_params': 'cpan Amazon::S3', 'removes': None, 'warn': True, '_uses_shell': False, 'stdin': None}}, 'stdout_lines': ['Loading internal null logger. Install Log::Log4perl for logging messages', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/01mailrc.txt.gz', "Reading '/home/ubuntu/.cpan/sources/authors/01mailrc.txt.gz'", '............................................................................DONE', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/modules/02packages.details.txt.gz', "Reading '/home/ubuntu/.cpan/sources/modules/02packages.details.txt.gz'", ' Database was generated on Fri, 12 Feb 2016 02:17:02 GMT', 'Warning: This index file is 699 days old.', ' Please check the host you chose as your CPAN mirror for staleness.', " I'll continue but problems seem likely to happen.\x07", '............................................................................DONE', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/modules/03modlist.data.gz', "Reading '/home/ubuntu/.cpan/sources/modules/03modlist.data.gz'", 'DONE', 'Writing /home/ubuntu/.cpan/Metadata', "Running install for module 'Amazon::S3'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535 with size 0', " Warning: no 
success downloading '/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz.tmp6535'. Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS.gz', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/CHECKSUMS.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/T/TI/TIMA/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/T/TI/TIMA/Amazon-S3-0.45.tar.gz ok', "'YAML' not installed, will not store persistent state", 'Configuring T/TI/TIMA/Amazon-S3-0.45.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Amazon::S3', 'Writing MYMETA.yml and MYMETA.json', ' TIMA/Amazon-S3-0.45.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for T/TI/TIMA/Amazon-S3-0.45.tar.gz', '---- Unsatisfied dependencies detected during ----', '---- TIMA/Amazon-S3-0.45.tar.gz ----', ' Digest::MD5::File [requires]', ' Digest::HMAC_SHA1 [requires]', ' XML::Simple [requires]', ' Class::Accessor::Fast [requires]', ' LWP::UserAgent::Determined [requires]', "Running install for module 'Digest::MD5::File'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/D/DM/DMUEY/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz ok', 'Configuring D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Digest::MD5::File', 'Writing MYMETA.yml and MYMETA.json', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for D/DM/DMUEY/Digest-MD5-File-0.08.tar.gz', 'cp File.pm blib/lib/Digest/MD5/File.pm', 'Manifying 1 pod document', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/1.t .. 
ok', 'All tests successful.', 'Files=1, Tests=6, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.08 cusr 0.00 csys = 0.10 CPU)', 'Result: PASS', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 1 pod document', 'Installing /usr/local/share/perl/5.22.1/Digest/MD5/File.pm', 'Installing /usr/local/man/man3/Digest::MD5::File.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' DMUEY/Digest-MD5-File-0.08.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'Digest::HMAC_SHA1'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz.tmp6535'. Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GA/GAAS/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/G/GA/GAAS/Digest-HMAC-1.03.tar.gz ok', 'Configuring G/GA/GAAS/Digest-HMAC-1.03.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for Digest::HMAC', 'Writing MYMETA.yml and MYMETA.json', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for G/GA/GAAS/Digest-HMAC-1.03.tar.gz', 'cp lib/Digest/HMAC_SHA1.pm blib/lib/Digest/HMAC_SHA1.pm', 'cp lib/Digest/HMAC.pm blib/lib/Digest/HMAC.pm', 'cp lib/Digest/HMAC_MD5.pm blib/lib/Digest/HMAC_MD5.pm', 'Manifying 3 pod documents', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make -- OK', 'Running make test', 'PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, \'blib/lib\', \'blib/arch\')" t/*.t', 't/rfc2202.t .. 
ok', 'All tests successful.', 'Files=1, Tests=14, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.01 cusr 0.00 csys = 0.03 CPU)', 'Result: PASS', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make test -- OK', 'Running make install', 'Manifying 3 pod documents', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC.pm', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC_SHA1.pm', 'Installing /usr/local/share/perl/5.22.1/Digest/HMAC_MD5.pm', 'Installing /usr/local/man/man3/Digest::HMAC.3pm', 'Installing /usr/local/man/man3/Digest::HMAC_MD5.3pm', 'Installing /usr/local/man/man3/Digest::HMAC_SHA1.3pm', 'Appending installation info to /usr/local/lib/x86_64-linux-gnu/perl/5.22.1/perllocal.pod', ' GAAS/Digest-HMAC-1.03.tar.gz', ' /usr/bin/make install -- OK', "Running install for module 'XML::Simple'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz', 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/G/GR/GRANTM/CHECKSUMS', 'Checksum for /home/ubuntu/.cpan/sources/authors/id/G/GR/GRANTM/XML-Simple-2.22.tar.gz ok', 'Configuring G/GR/GRANTM/XML-Simple-2.22.tar.gz with Makefile.PL', 'Checking if your kit is complete...', 'Looks good', 'Generating a Unix-style Makefile', 'Writing Makefile for XML::Simple', 'Writing MYMETA.yml and MYMETA.json', ' GRANTM/XML-Simple-2.22.tar.gz', ' /usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK', 'Running make for G/GR/GRANTM/XML-Simple-2.22.tar.gz', '---- Unsatisfied dependencies detected during ----', '---- GRANTM/XML-Simple-2.22.tar.gz ----', ' XML::NamespaceSupport [requires]', ' XML::SAX [requires]', ' XML::SAX::Expat [requires]', "Running install for module 'XML::NamespaceSupport'", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535'. 
Giving up on it.", 'Fetching with LWP:', 'http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', "LWP failed with code[500] message[Can't connect to apt-mirror.sepia.ceph.com:80]", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535'. Giving up on it.", '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535'. Giving up on it.", 'No external ftp command available', '', '', 'Trying with', ' /usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535"', 'to get', ' http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz', '', ' Function system("/usr/bin/wget -O "/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535" "http://apt-mirror.sepia.ceph.com/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz" ")', ' returned status 4 (wstat 1024), left', '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535 with size 0', " Warning: no success downloading '/home/ubuntu/.cpan/sources/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz.tmp6535'. Giving up on it.", 'No external ftp command available', '', 'Please check, if the URLs I found in your configuration file', '(http://apt-mirror.sepia.ceph.com/CPAN/) are valid. The urllist can be', "edited. E.g. with 'o conf urllist push ftp://myurl/'", '', 'Could not fetch authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-1.11.tar.gz'], 'failed': True}}

fail 2055325 2018-01-10 14:33:09 2018-01-11 04:17:35 2018-01-11 05:07:34 0:49:59 0:11:33 0:38:26 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks.yaml} 2
Failure Reason:

Command failed on ovh089 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'
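apt-get exits with status 100 on any error, so the command alone does not say whether the requested build was missing from the repository or the repository was unreachable. A minimal sketch of how one might check on the failing node; this is a hypothetical diagnostic, not part of the job:

    # Refresh the package index, then list which ceph versions the node can
    # actually see and which repositories they come from.
    sudo apt-get update
    apt-cache policy ceph
    apt-cache madison ceph | grep 13.0.0-4563 \
        || echo 'requested build not present in any configured repository'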

fail 2055327 2018-01-10 14:33:10 2018-01-11 04:17:36 2018-01-11 05:05:36 0:48:00 0:11:39 0:36:21 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on ovh012 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055329 2018-01-10 14:33:10 2018-01-11 04:17:38 2018-01-11 04:59:37 0:41:59 0:12:46 0:29:13 ovh master rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

Command failed on ovh067 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055331 2018-01-10 14:33:11 2018-01-11 04:17:40 2018-01-11 04:59:39 0:41:59 0:14:03 0:27:56 ovh master rados/perf/{ceph.yaml objectstore/bluestore.yaml openstack.yaml settings/optimized.yaml workloads/sample_fio.yaml} 1
Failure Reason:

Command failed on ovh095 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055333 2018-01-10 14:33:12 2018-01-11 04:17:55 2018-01-11 04:39:54 0:21:59 ovh master rados/monthrash/{ceph.yaml clusters/9-mons.yaml mon_kv_backend/leveldb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
Failure Reason:

Command failed on ovh074 with status 100: 'sudo apt-get -y install linux-image-generic'
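Same apt exit status as above, but from the kernel-install step rather than the ceph package install. One way to see whether it is a repository or a download problem, sketched here as an assumption about what would help rather than anything the run did:

    # Re-run the failing install with apt's acquire debugging enabled so any
    # unreachable source shows up in the output.
    sudo apt-get update
    sudo apt-get -y -o Debug::pkgAcquire::Worker=1 install linux-image-generic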

fail 2055335 2018-01-10 14:33:12 2018-01-11 04:18:02 2018-01-11 04:48:01 0:29:59 0:10:30 0:19:29 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

{'ovh009.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}}

fail 2055337 2018-01-10 14:33:13 2018-01-11 04:18:03 2018-01-11 04:56:02 0:37:59 0:10:50 0:27:09 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on ovh097 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055339 2018-01-10 14:33:14 2018-01-11 04:19:38 2018-01-11 04:39:38 0:20:00 0:10:38 0:09:22 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on ovh059 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055341 2018-01-10 14:33:14 2018-01-11 04:19:42 2018-01-11 04:41:41 0:21:59 0:10:50 0:11:09 ovh master rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/bluestore.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

Command failed on ovh040 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055343 2018-01-10 14:33:15 2018-01-11 04:19:51 2018-01-11 04:37:50 0:17:59 0:09:38 0:08:21 ovh master rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_workunit_loadgen_mostlyread.yaml} 2
Failure Reason:

{'ovh097.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8\nSuggested packages:\n krb5-doc\nThe following NEW packages will be installed:\n krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9\n libkdb5-8\n0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.\nNeed to get 302 kB of archives.\nAfter this operation, 1296 kB of additional disk space will be used.\nErr:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nGet:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2 [54.5 kB]\nGet:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2 [36.7 kB]\nGet:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2 [37.1 kB]\nGet:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]\nGet:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]\nFetched 278 kB in 19s (13.9 kB/s)\n", 'cache_updated': False, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'name': 'krb5-user', 'package': ['krb5-user'], 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'failed': True, 'stderr': "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'rc': 100, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'krb5-user\'\' failed: E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following additional packages will be installed:', ' krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8', 'Suggested packages:', ' krb5-doc', 'The following NEW packages will be installed:', ' krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9', ' libkdb5-8', '0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.', 'Need to get 302 kB of archives.', 'After this operation, 1296 kB of additional disk space will be used.', 'Err:1 
http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Get:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2 [54.5 kB]', 'Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2 [36.7 kB]', 'Get:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2 [37.1 kB]', 'Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]', 'Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]', 'Fetched 278 kB in 19s (13.9 kB/s)'], 'cache_update_time': 1515645017}}
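The stderr in the result above is a name-resolution failure ("Temporary failure resolving"), so the archive itself is likely fine and the node's DNS is the first thing to check. A minimal sketch using only tools present on a stock Ubuntu install; the hostname is taken from the log:

    # getent uses the same resolver path as apt, so a failure here reproduces
    # the 'Temporary failure resolving' condition directly.
    getent hosts nova.clouds.archive.ubuntu.com || cat /etc/resolv.conf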

fail 2055345 2018-01-10 14:33:16 2018-01-11 04:21:35 2018-01-11 04:57:34 0:35:59 0:11:45 0:24:14 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on ovh081 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055347 2018-01-10 14:33:16 2018-01-11 04:21:41 2018-01-11 04:45:40 0:23:59 0:11:15 0:12:44 ovh master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on ovh066 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055349 2018-01-10 14:33:17 2018-01-11 04:21:55 2018-01-11 04:37:54 0:15:59 0:09:06 0:06:53 ovh master rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} 1
Failure Reason:

{'ovh023.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ["E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkdb5-8_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", '', 'E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?'], 'changed': False, '_ansible_no_log': False, 'stdout': "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following additional packages will be installed:\n krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8\nSuggested packages:\n krb5-doc\nThe following NEW packages will be installed:\n krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9\n libkdb5-8\n0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.\nNeed to get 302 kB of archives.\nAfter this operation, 1296 kB of additional disk space will be used.\nErr:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nErr:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nErr:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nErr:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2\n Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\nGet:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]\nGet:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]\nFetched 150 kB in 1min 56s (1290 B/s)\n", 'cache_updated': False, 'invocation': {'module_args': {'autoremove': False, 'force': False, 'force_apt_get': False, 'update_cache': None, 'only_upgrade': False, 'deb': None, 'cache_valid_time': 0, 'dpkg_options': 'force-confdef,force-confold', 'upgrade': None, 'name': 'krb5-user', 'package': ['krb5-user'], 'autoclean': False, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'default_release': None, 'install_recommends': None}}, 'failed': True, 'stderr': "E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 
'nova.clouds.archive.ubuntu.com'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkdb5-8_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving 'nova.clouds.archive.ubuntu.com'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", 'rc': 100, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'krb5-user\'\' failed: E: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/kerberos-configs/krb5-config_2.3_all.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libgssrpc4_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkadm5clnt-mit9_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkdb5-8_1.13.2+dfsg-5ubuntu2_amd64.deb Temporary failure resolving \'nova.clouds.archive.ubuntu.com\'\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n', 'stdout_lines': ['Reading package lists...', 'Building dependency tree...', 'Reading state information...', 'The following additional packages will be installed:', ' krb5-config libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9 libkdb5-8', 'Suggested packages:', ' krb5-doc', 'The following NEW packages will be installed:', ' krb5-config krb5-user libgssrpc4 libkadm5clnt-mit9 libkadm5srv-mit9', ' libkdb5-8', '0 upgraded, 6 newly installed, 0 to remove and 4 not upgraded.', 'Need to get 302 kB of archives.', 'After this operation, 1296 kB of additional disk space will be used.', 'Err:1 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 krb5-config all 2.3', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Err:2 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgssrpc4 amd64 1.13.2+dfsg-5ubuntu2', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Err:3 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5clnt-mit9 amd64 1.13.2+dfsg-5ubuntu2', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Err:4 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkdb5-8 amd64 1.13.2+dfsg-5ubuntu2', " Temporary failure resolving 'nova.clouds.archive.ubuntu.com'", 'Get:5 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libkadm5srv-mit9 amd64 1.13.2+dfsg-5ubuntu2 [51.3 kB]', 'Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 krb5-user amd64 1.13.2+dfsg-5ubuntu2 [98.7 kB]', 'Fetched 150 kB in 1min 56s (1290 B/s)'], 'cache_update_time': 1515645301}}

fail 2055351 2018-01-10 14:33:18 2018-01-11 04:23:43 2018-01-11 04:55:43 0:32:00 0:11:27 0:20:33 ovh master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on ovh008 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'

fail 2055353 2018-01-10 14:33:18 2018-01-11 04:25:35 2018-01-11 04:41:35 0:16:00 0:10:57 0:05:03 ovh master rados/objectstore/keyvaluedb.yaml 1
Failure Reason:

Command failed on ovh033 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=13.0.0-4563-g5fd5e9b-1xenial ceph-mds=13.0.0-4563-g5fd5e9b-1xenial ceph-mgr=13.0.0-4563-g5fd5e9b-1xenial ceph-common=13.0.0-4563-g5fd5e9b-1xenial ceph-fuse=13.0.0-4563-g5fd5e9b-1xenial ceph-test=13.0.0-4563-g5fd5e9b-1xenial radosgw=13.0.0-4563-g5fd5e9b-1xenial python-ceph=13.0.0-4563-g5fd5e9b-1xenial libcephfs2=13.0.0-4563-g5fd5e9b-1xenial libcephfs-dev=13.0.0-4563-g5fd5e9b-1xenial libcephfs-java=13.0.0-4563-g5fd5e9b-1xenial libcephfs-jni=13.0.0-4563-g5fd5e9b-1xenial librados2=13.0.0-4563-g5fd5e9b-1xenial librbd1=13.0.0-4563-g5fd5e9b-1xenial rbd-fuse=13.0.0-4563-g5fd5e9b-1xenial'