Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 4203909 2019-08-09 23:09:59 2019-08-09 23:10:16 2019-08-10 01:56:18 2:46:02 2:28:38 0:17:24 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} 2
pass 4203910 2019-08-09 23:10:00 2019-08-09 23:10:16 2019-08-10 00:02:16 0:52:00 0:36:22 0:15:38 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
fail 4203911 2019-08-09 23:10:01 2019-08-09 23:10:17 2019-08-09 23:26:16 0:15:59 0:06:42 0:09:17 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} 1
Failure Reason:

Command failed on mira041 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203912 2019-08-09 23:10:01 2019-08-09 23:10:16 2019-08-09 23:48:16 0:38:00 0:21:02 0:16:58 mira master centos 7.6 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4203913 2019-08-09 23:10:02 2019-08-09 23:10:17 2019-08-10 00:22:17 1:12:00 1:03:02 0:08:58 mira master ubuntu 18.04 rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203914 2019-08-09 23:10:03 2019-08-09 23:10:17 2019-08-09 23:48:16 0:37:59 0:19:14 0:18:45 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
pass 4203915 2019-08-09 23:10:04 2019-08-09 23:10:17 2019-08-09 23:46:16 0:35:59 0:23:06 0:12:53 mira master centos 7.6 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/pool-create-delete.yaml} 2
pass 4203916 2019-08-09 23:10:04 2019-08-09 23:10:17 2019-08-09 23:24:16 0:13:59 0:05:13 0:08:46 mira master ubuntu 18.04 rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203917 2019-08-09 23:10:05 2019-08-09 23:10:17 2019-08-09 23:50:16 0:39:59 0:30:18 0:09:41 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
pass 4203918 2019-08-09 23:10:06 2019-08-09 23:10:17 2019-08-09 23:40:17 0:30:00 0:21:02 0:08:58 mira master rhel 7.6 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203919 2019-08-09 23:10:07 2019-08-09 23:24:26 2019-08-09 23:48:25 0:23:59 0:14:47 0:09:12 mira master centos 7.6 rados/rest/{mgr-restful.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4203920 2019-08-09 23:10:08 2019-08-09 23:26:31 2019-08-10 00:02:30 0:35:59 0:26:57 0:09:02 mira master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203921 2019-08-09 23:10:08 2019-08-09 23:40:30 2019-08-10 00:22:29 0:41:59 0:22:48 0:19:11 mira master centos rados/singleton-flat/valgrind-leaks.yaml 1
pass 4203922 2019-08-09 23:10:09 2019-08-09 23:46:17 2019-08-10 00:06:17 0:20:00 0:10:22 0:09:38 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203923 2019-08-09 23:10:10 2019-08-09 23:48:29 2019-08-10 00:26:29 0:38:00 0:15:51 0:22:09 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/crush.yaml} 1
pass 4203924 2019-08-09 23:10:11 2019-08-09 23:48:29 2019-08-10 02:32:31 2:44:02 2:32:14 0:11:48 mira master ubuntu 18.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} 4
dead 4203925 2019-08-09 23:10:12 2019-08-09 23:48:29 2019-08-10 00:10:28 0:21:59 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_stress_watch.yaml}
Failure Reason:

reached maximum tries (100) after waiting for 600 seconds

pass 4203926 2019-08-09 23:10:13 2019-08-09 23:50:30 2019-08-10 00:18:29 0:27:59 0:15:47 0:12:12 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4203927 2019-08-09 23:10:13 2019-08-10 00:02:18 2019-08-10 00:28:18 0:26:00 0:15:46 0:10:14 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} 1
pass 4203928 2019-08-09 23:10:14 2019-08-10 00:02:32 2019-08-10 00:50:31 0:47:59 0:32:16 0:15:43 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
pass 4203929 2019-08-09 23:10:15 2019-08-10 00:06:30 2019-08-10 00:42:30 0:36:00 0:26:48 0:09:12 mira master ubuntu 18.04 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4203930 2019-08-09 23:10:16 2019-08-10 00:10:43 2019-08-10 01:50:43 1:40:00 1:28:58 0:11:02 mira master centos 7.6 rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4203931 2019-08-09 23:10:17 2019-08-10 00:18:43 2019-08-10 00:44:42 0:25:59 0:15:59 0:10:00 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
pass 4203932 2019-08-09 23:10:18 2019-08-10 00:22:27 2019-08-10 01:12:27 0:50:00 0:34:47 0:15:13 mira master centos 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 4203933 2019-08-09 23:10:18 2019-08-10 00:22:31 2019-08-10 00:58:30 0:35:59 0:26:14 0:09:45 mira master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
pass 4203934 2019-08-09 23:10:19 2019-08-10 00:26:42 2019-08-10 01:00:42 0:34:00 0:24:40 0:09:20 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} 2
pass 4203935 2019-08-09 23:10:20 2019-08-10 00:28:22 2019-08-10 01:40:22 1:12:00 1:01:11 0:10:49 mira master ubuntu 18.04 rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4203936 2019-08-09 23:10:21 2019-08-10 00:42:43 2019-08-10 00:58:43 0:16:00 0:07:20 0:08:40 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} 1
Failure Reason:

Command failed on mira110 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203937 2019-08-09 23:10:22 2019-08-10 00:44:55 2019-08-10 01:30:55 0:46:00 0:39:16 0:06:44 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 2
pass 4203938 2019-08-09 23:10:22 2019-08-10 00:50:45 2019-08-10 01:12:44 0:21:59 0:06:33 0:15:26 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203939 2019-08-09 23:10:23 2019-08-10 00:58:43 2019-08-10 01:28:43 0:30:00 0:22:19 0:07:41 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_striper.yaml} 2
pass 4203940 2019-08-09 23:10:24 2019-08-10 00:58:44 2019-08-10 01:38:43 0:39:59 0:33:36 0:06:23 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
pass 4203941 2019-08-09 23:10:25 2019-08-10 01:00:55 2019-08-10 01:16:54 0:15:59 0:05:52 0:10:07 mira master ubuntu 18.04 rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203942 2019-08-09 23:10:26 2019-08-10 01:12:35 2019-08-10 01:48:34 0:35:59 0:14:35 0:21:24 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} 2
fail 4203943 2019-08-09 23:10:26 2019-08-10 01:12:45 2019-08-10 02:32:45 1:20:00 0:56:08 0:23:52 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
Failure Reason:

"2019-08-10T02:17:39.272910+0000 osd.4 (osd.4) 3 : cluster [ERR] 7.13 required past_interval bounds are empty [736,733) but past_intervals is not: ([625,732] all_participants=4,7,11 intervals=([625,658] acting 4,11),([659,732] acting 7,11))" in cluster log

pass 4203944 2019-08-09 23:10:27 2019-08-10 01:17:08 2019-08-10 01:53:08 0:36:00 0:28:15 0:07:45 mira master rhel 7.6 rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_recovery.yaml} 3
pass 4203945 2019-08-09 23:10:28 2019-08-10 01:28:44 2019-08-10 02:26:44 0:58:00 0:44:34 0:13:26 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
pass 4203946 2019-08-09 23:10:29 2019-08-10 01:31:08 2019-08-10 02:13:08 0:42:00 0:33:00 0:09:00 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
pass 4203947 2019-08-09 23:10:30 2019-08-10 01:38:45 2019-08-10 02:02:44 0:23:59 0:13:37 0:10:22 mira master ubuntu 18.04 rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203948 2019-08-09 23:10:30 2019-08-10 01:40:35 2019-08-10 02:02:34 0:21:59 0:11:23 0:10:36 mira master ubuntu 18.04 rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4203949 2019-08-09 23:10:31 2019-08-10 01:48:48 2019-08-10 02:14:47 0:25:59 0:20:01 0:05:58 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_radosbench.yaml} 1
Failure Reason:

Command failed on mira115 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203950 2019-08-09 23:10:32 2019-08-10 01:50:57 2019-08-10 04:26:59 2:36:02 2:16:55 0:19:07 mira master rhel 7.6 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/one.yaml workloads/rados_5925.yaml} 2
pass 4203951 2019-08-09 23:10:33 2019-08-10 01:53:21 2019-08-10 02:33:21 0:40:00 0:20:50 0:19:10 mira master centos 7.6 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4203952 2019-08-09 23:10:33 2019-08-10 01:56:32 2019-08-10 04:30:33 2:34:01 2:15:41 0:18:20 mira master rhel 7.6 rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203953 2019-08-09 23:10:34 2019-08-10 02:02:48 2019-08-10 02:48:48 0:46:00 0:33:10 0:12:50 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
pass 4203954 2019-08-09 23:10:35 2019-08-10 02:02:48 2019-08-10 02:34:47 0:31:59 0:18:59 0:13:00 mira master centos 7.6 rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4203955 2019-08-09 23:10:36 2019-08-10 02:13:09 2019-08-10 03:05:09 0:52:00 0:44:10 0:07:50 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_big.yaml} 2
pass 4203956 2019-08-09 23:10:37 2019-08-10 02:15:01 2019-08-10 03:15:01 1:00:00 0:51:08 0:08:52 mira master rhel 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4203957 2019-08-09 23:10:37 2019-08-10 02:26:58 2019-08-10 03:04:58 0:38:00 0:24:23 0:13:37 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
pass 4203958 2019-08-09 23:10:38 2019-08-10 02:32:32 2019-08-10 02:50:31 0:17:59 0:08:23 0:09:36 mira master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} 2
fail 4203959 2019-08-09 23:10:39 2019-08-10 02:32:47 2019-08-10 03:00:46 0:27:59 0:14:50 0:13:09 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} 1
Failure Reason:

Command failed on mira072 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203960 2019-08-09 23:10:40 2019-08-10 02:33:22 2019-08-10 02:51:22 0:18:00 0:07:51 0:10:09 mira master ubuntu 18.04 rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203961 2019-08-09 23:10:40 2019-08-10 02:35:02 2019-08-10 03:29:01 0:53:59 0:44:09 0:09:50 mira master centos 7.6 rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} 1
pass 4203962 2019-08-09 23:10:41 2019-08-10 02:48:49 2019-08-10 03:26:49 0:38:00 0:24:21 0:13:39 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
fail 4203963 2019-08-09 23:10:42 2019-08-10 02:50:45 2019-08-10 03:50:45 1:00:00 0:46:17 0:13:43 mira master centos 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} 2
Failure Reason:

"2019-08-10T03:32:06.356013+0000 mon.a (mon.1) 511 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 4203964 2019-08-09 23:10:43 2019-08-10 02:51:23 2019-08-10 03:25:22 0:33:59 0:24:30 0:09:29 mira master rhel 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4203965 2019-08-09 23:10:44 2019-08-10 03:01:00 2019-08-10 06:01:02 3:00:02 2:42:04 0:17:58 mira master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
pass 4203966 2019-08-09 23:10:44 2019-08-10 03:05:12 2019-08-10 03:43:12 0:38:00 0:22:32 0:15:28 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
pass 4203967 2019-08-09 23:10:45 2019-08-10 03:05:12 2019-08-10 03:35:12 0:30:00 0:24:04 0:05:56 mira master rhel 7.6 rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203968 2019-08-09 23:10:46 2019-08-10 03:15:15 2019-08-10 03:41:14 0:25:59 0:19:12 0:06:47 mira master rhel 7.6 rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203969 2019-08-09 23:10:47 2019-08-10 03:25:36 2019-08-10 04:09:36 0:44:00 0:30:46 0:13:14 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
fail 4203970 2019-08-09 23:10:48 2019-08-10 03:26:55 2019-08-10 03:52:54 0:25:59 0:15:05 0:10:54 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

Command failed on mira002 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203971 2019-08-09 23:10:48 2019-08-10 03:29:15 2019-08-10 03:45:14 0:15:59 0:05:52 0:10:07 mira master ubuntu 18.04 rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203972 2019-08-09 23:10:49 2019-08-10 03:35:25 2019-08-10 03:57:24 0:21:59 0:12:23 0:09:36 mira master ubuntu 18.04 rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203973 2019-08-09 23:10:50 2019-08-10 03:41:28 2019-08-10 04:21:27 0:39:59 0:27:26 0:12:33 mira master centos 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_mix.yaml} 2
pass 4203974 2019-08-09 23:10:51 2019-08-10 03:43:13 2019-08-10 04:07:12 0:23:59 0:14:19 0:09:40 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
fail 4203975 2019-08-09 23:10:51 2019-08-10 03:45:21 2019-08-10 04:33:21 0:48:00 0:40:53 0:07:07 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} 2
Failure Reason:

Test failure: test_create_get_update_delete_w_tenant (tasks.mgr.dashboard.test_rgw.RgwBucketTest)

pass 4203976 2019-08-09 23:10:52 2019-08-10 03:50:46 2019-08-10 04:26:46 0:36:00 0:27:16 0:08:44 mira master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203977 2019-08-09 23:10:53 2019-08-10 03:52:56 2019-08-10 04:38:55 0:45:59 0:35:40 0:10:19 mira master ubuntu 18.04 rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203978 2019-08-09 23:10:54 2019-08-10 03:57:39 2019-08-10 06:49:40 2:52:01 2:33:33 0:18:28 mira master rhel 7.6 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/sync-many.yaml workloads/rados_api_tests.yaml} 2
pass 4203979 2019-08-09 23:10:55 2019-08-10 04:07:26 2019-08-10 04:35:26 0:28:00 0:20:40 0:07:20 mira master rhel 7.6 rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203980 2019-08-09 23:10:56 2019-08-10 04:09:37 2019-08-10 04:49:37 0:40:00 0:27:40 0:12:20 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
pass 4203981 2019-08-09 23:10:56 2019-08-10 04:21:41 2019-08-10 04:51:40 0:29:59 0:20:12 0:09:47 mira master rhel 7.6 rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} 3
pass 4203982 2019-08-09 23:10:57 2019-08-10 04:27:00 2019-08-10 04:46:59 0:19:59 0:10:06 0:09:53 mira master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
pass 4203983 2019-08-09 23:10:58 2019-08-10 04:27:00 2019-08-10 05:35:00 1:08:00 0:43:08 0:24:52 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
fail 4203984 2019-08-09 23:10:59 2019-08-10 04:30:47 2019-08-10 04:48:46 0:17:59 0:07:27 0:10:32 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203985 2019-08-09 23:11:00 2019-08-10 04:33:35 2019-08-10 05:41:35 1:08:00 0:58:21 0:09:39 mira master ubuntu 18.04 rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203986 2019-08-09 23:11:00 2019-08-10 04:35:39 2019-08-10 05:29:39 0:54:00 0:46:07 0:07:53 mira master rhel 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4203987 2019-08-09 23:11:01 2019-08-10 04:39:09 2019-08-10 05:27:08 0:47:59 0:33:26 0:14:33 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
pass 4203988 2019-08-09 23:11:02 2019-08-10 04:47:04 2019-08-10 05:35:04 0:48:00 0:21:22 0:26:38 mira master centos 7.6 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4203989 2019-08-09 23:11:03 2019-08-10 04:48:59 2019-08-10 05:34:59 0:46:00 0:39:28 0:06:32 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} 2
pass 4203990 2019-08-09 23:11:04 2019-08-10 04:49:38 2019-08-10 06:11:39 1:22:01 1:11:03 0:10:58 mira master centos 7.6 rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4203991 2019-08-09 23:11:04 2019-08-10 04:51:54 2019-08-10 05:39:53 0:47:59 0:31:45 0:16:14 mira master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} 2
pass 4203992 2019-08-09 23:11:05 2019-08-10 05:27:22 2019-08-10 05:59:22 0:32:00 0:21:37 0:10:23 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
pass 4203993 2019-08-09 23:11:06 2019-08-10 05:29:41 2019-08-10 06:01:40 0:31:59 0:24:19 0:07:40 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} 2
fail 4203994 2019-08-09 23:11:07 2019-08-10 05:35:12 2019-08-10 08:11:14 2:36:02 2:16:36 0:19:26 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_rw.yaml} 1
Failure Reason:

Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4203995 2019-08-09 23:11:08 2019-08-10 05:35:12 2019-08-10 06:17:12 0:42:00 0:30:30 0:11:30 mira master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
pass 4203996 2019-08-09 23:11:13 2019-08-10 05:35:13 2019-08-10 06:09:12 0:33:59 0:23:06 0:10:53 mira master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
pass 4203997 2019-08-09 23:11:14 2019-08-10 05:40:07 2019-08-10 06:22:06 0:41:59 0:26:24 0:15:35 mira master ubuntu 18.04 rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4203998 2019-08-09 23:11:15 2019-08-10 05:41:36 2019-08-10 06:11:35 0:29:59 0:20:04 0:09:55 mira master rhel 7.6 rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
pass 4203999 2019-08-09 23:11:16 2019-08-10 05:59:23 2019-08-10 06:27:22 0:27:59 0:18:58 0:09:01 mira master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/misc.yaml} 1
pass 4204000 2019-08-09 23:11:17 2019-08-10 06:01:16 2019-08-10 06:21:15 0:19:59 0:09:23 0:10:36 mira master ubuntu 18.04 rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4204001 2019-08-09 23:11:17 2019-08-10 06:01:41 2019-08-10 06:39:41 0:38:00 0:31:51 0:06:09 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} 2
pass 4204002 2019-08-09 23:11:18 2019-08-10 06:09:26 2019-08-10 06:51:26 0:42:00 0:19:11 0:22:49 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
pass 4204003 2019-08-09 23:11:19 2019-08-10 06:11:37 2019-08-10 06:27:36 0:15:59 0:06:27 0:09:32 mira master ubuntu 18.04 rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4204004 2019-08-09 23:11:20 2019-08-10 06:11:40 2019-08-10 08:51:42 2:40:02 2:21:53 0:18:09 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/readwrite.yaml} 2
pass 4204005 2019-08-09 23:11:20 2019-08-10 06:17:25 2019-08-10 08:57:27 2:40:02 2:21:40 0:18:22 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} 2
fail 4204006 2019-08-09 23:11:21 2019-08-10 06:21:30 2019-08-10 06:45:29 0:23:59 0:14:36 0:09:23 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} 1
Failure Reason:

Command failed on mira115 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4204007 2019-08-09 23:11:22 2019-08-10 06:22:08 2019-08-10 07:00:07 0:37:59 0:16:51 0:21:08 mira master centos 7.6 rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 2
fail 4204008 2019-08-09 23:11:23 2019-08-10 06:27:37 2019-08-10 06:39:36 0:11:59 0:02:44 0:09:15 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

500 Server Error: Internal Server Error for url: https://3.chacra.ceph.com/repos/ceph/wip-kefu-testing-2019-08-09-2332/4801356e9e60dc30647e75824273e60f8d2aba22/ubuntu/bionic/flavors/default/repo

fail 4204009 2019-08-09 23:11:24 2019-08-10 06:27:37 2019-08-10 06:41:36 0:13:59 0:03:03 0:10:56 mira master ubuntu 18.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

500 Server Error: Internal Server Error for url: https://3.chacra.ceph.com/repos/ceph/wip-kefu-testing-2019-08-09-2332/4801356e9e60dc30647e75824273e60f8d2aba22/ubuntu/bionic/flavors/default/repo

pass 4204010 2019-08-09 23:11:24 2019-08-10 06:39:37 2019-08-10 07:17:37 0:38:00 0:27:51 0:10:09 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
pass 4204011 2019-08-09 23:11:25 2019-08-10 06:39:42 2019-08-10 06:59:41 0:19:59 0:09:14 0:10:45 mira master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/insights.yaml} 2
pass 4204012 2019-08-09 23:11:26 2019-08-10 06:41:50 2019-08-10 07:41:50 1:00:00 0:37:38 0:22:22 mira master centos 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4204013 2019-08-09 23:11:27 2019-08-10 06:45:44 2019-08-10 07:09:43 0:23:59 0:13:11 0:10:48 mira master centos 7.6 rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4204014 2019-08-09 23:11:27 2019-08-10 06:49:55 2019-08-10 07:19:55 0:30:00 0:20:26 0:09:34 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
fail 4204015 2019-08-09 23:11:28 2019-08-10 06:51:40 2019-08-10 07:09:39 0:17:59 0:07:29 0:10:30 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_rw.yaml} 1
Failure Reason:

Command failed on mira101 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4204016 2019-08-09 23:11:29 2019-08-10 06:59:55 2019-08-10 07:51:55 0:52:00 0:42:19 0:09:41 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
pass 4204017 2019-08-09 23:11:30 2019-08-10 07:00:09 2019-08-10 07:26:08 0:25:59 0:12:06 0:13:53 mira master centos 7.6 rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_clock_with_skews.yaml} 2
fail 4204018 2019-08-09 23:11:31 2019-08-10 07:09:42 2019-08-10 14:09:48 7:00:06 6:46:53 0:13:13 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on mira101 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4801356e9e60dc30647e75824273e60f8d2aba22 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 4204019 2019-08-09 23:11:31 2019-08-10 07:09:44 2019-08-10 07:59:44 0:50:00 0:34:14 0:15:46 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
pass 4204020 2019-08-09 23:11:32 2019-08-10 07:17:38 2019-08-10 07:51:38 0:34:00 0:23:18 0:10:42 mira master centos 7.6 rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4204021 2019-08-09 23:11:33 2019-08-10 07:20:09 2019-08-10 07:46:09 0:26:00 0:16:32 0:09:28 mira master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/repair_test.yaml} 2
pass 4204022 2019-08-09 23:11:34 2019-08-10 07:26:22 2019-08-10 07:56:21 0:29:59 0:20:05 0:09:54 mira master ubuntu 18.04 rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4204023 2019-08-09 23:11:34 2019-08-10 07:41:52 2019-08-10 08:43:52 1:02:00 0:50:00 0:12:00 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 2
pass 4204024 2019-08-09 23:11:35 2019-08-10 07:46:10 2019-08-10 08:08:09 0:21:59 0:11:02 0:10:57 mira master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
pass 4204025 2019-08-09 23:11:36 2019-08-10 07:51:52 2019-08-10 08:35:51 0:43:59 0:30:45 0:13:14 mira master centos 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} 2
fail 4204026 2019-08-09 23:11:37 2019-08-10 07:51:56 2019-08-10 10:21:57 2:30:01 2:11:05 0:18:56 mira master rhel 7.6 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204027 2019-08-09 23:11:38 2019-08-10 07:56:23 2019-08-10 08:22:22 0:25:59 0:13:46 0:12:13 mira master centos 7.6 rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4204028 2019-08-09 23:11:39 2019-08-10 07:59:58 2019-08-10 08:39:58 0:40:00 0:31:02 0:08:58 mira master rhel 7.6 rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 2
pass 4204029 2019-08-09 23:11:39 2019-08-10 08:08:23 2019-08-10 08:54:23 0:46:00 0:38:41 0:07:19 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/module_selftest.yaml} 2
fail 4204030 2019-08-09 23:11:40 2019-08-10 08:11:29 2019-08-10 08:27:28 0:15:59 0:07:19 0:08:40 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} 1
Failure Reason:

Command failed on mira065 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4204031 2019-08-09 23:11:41 2019-08-10 08:22:24 2019-08-10 09:08:23 0:45:59 0:24:16 0:21:43 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
pass 4204032 2019-08-09 23:11:42 2019-08-10 08:27:42 2019-08-10 09:03:42 0:36:00 0:27:27 0:08:33 mira master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4204033 2019-08-09 23:11:42 2019-08-10 08:36:05 2019-08-10 09:00:05 0:24:00 0:16:18 0:07:42 mira master rhel 7.6 rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/mon.yaml} 1
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204034 2019-08-09 23:11:43 2019-08-10 08:39:59 2019-08-10 11:10:01 2:30:02 2:10:47 0:19:15 mira master rhel 7.6 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204035 2019-08-09 23:11:44 2019-08-10 08:43:53 2019-08-10 09:09:52 0:25:59 0:16:06 0:09:53 mira master centos 7.6 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

+ sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --no-mon-config --op update-mon-db --mon-store-path /home/ubuntu/cephtest/mon-store
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-3688-g4801356/rpm/el7/BUILD/ceph-15.0.0-3688-g4801356/src/osd/OSDMap.cc: In function 'int OSDMap::apply_incremental(const OSDMap::Incremental&)' thread 7f34a47f5980 time 2019-08-10T09:05:15.028924+0000
/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/15.0.0-3688-g4801356/rpm/el7/BUILD/ceph-15.0.0-3688-g4801356/src/osd/OSDMap.cc: 2001: FAILED ceph_assert(inc.epoch == epoch+1)
ceph version 15.0.0-3688-g4801356 (4801356e9e60dc30647e75824273e60f8d2aba22) octopus (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x7f349a5c9f95]
 2: (()+0x26115d) [0x7f349a5ca15d]
 3: (OSDMap::apply_incremental(OSDMap::Incremental const&)+0x1f35) [0x7f349a99db95]
 4: (update_mon_db(ObjectStore&, OSDSuperblock&, std::string const&, std::string const&)+0x2102) [0x555c187aeb92]
 5: (main()+0x557e) [0x555c18733dee]
 6: (__libc_start_main()+0xf5) [0x7f349847d495]
 7: (()+0x3cab80) [0x555c18763b80]
*** Caught signal (Aborted) ** in thread 7f34a47f5980 thread_name:ceph-objectstor
ceph version 15.0.0-3688-g4801356 (4801356e9e60dc30647e75824273e60f8d2aba22) octopus (dev)
 1: (()+0xf5d0) [0x7f3499abb5d0]
 2: (gsignal()+0x37) [0x7f34984912c7]
 3: (abort()+0x148) [0x7f34984929b8]
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x199) [0x7f349a5c9fe4]
 5: (()+0x26115d) [0x7f349a5ca15d]
 6: (OSDMap::apply_incremental(OSDMap::Incremental const&)+0x1f35) [0x7f349a99db95]
 7: (update_mon_db(ObjectStore&, OSDSuperblock&, std::string const&, std::string const&)+0x2102) [0x555c187aeb92]
 8: (main()+0x557e) [0x555c18733dee]
 9: (__libc_start_main()+0xf5) [0x7f349847d495]
 10: (()+0x3cab80) [0x555c18763b80]
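The `ceph_assert(inc.epoch == epoch+1)` that fired above enforces that incremental OSDMap updates are applied in strict, gap-free epoch order. A minimal sketch of that invariant (illustrative Python, not Ceph's actual C++ implementation; the class and method names here only mirror the symbols in the backtrace):

```python
class OSDMap:
    """Toy model of a versioned map that only accepts consecutive incrementals."""

    def __init__(self, epoch=0):
        self.epoch = epoch

    def apply_incremental(self, inc_epoch):
        # Mirrors the failed ceph_assert(inc.epoch == epoch + 1):
        # an incremental must target exactly the next epoch.
        if inc_epoch != self.epoch + 1:
            raise AssertionError(
                f"FAILED ceph_assert(inc.epoch == epoch+1): "
                f"got {inc_epoch}, expected {self.epoch + 1}")
        self.epoch = inc_epoch


m = OSDMap(epoch=5)
m.apply_incremental(6)      # consecutive: accepted, epoch is now 6
try:
    m.apply_incremental(8)  # gap in the epoch sequence: rejected
except AssertionError as e:
    print(e)
```

In the rebuild-mondb job above, `ceph-objectstore-tool --op update-mon-db` hit exactly this check while replaying incrementals, suggesting a non-consecutive epoch in the recovered map history.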

pass 4204036 2019-08-09 23:11:45 2019-08-10 08:51:56 2019-08-10 09:29:56 0:38:00 0:24:45 0:13:15 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
pass 4204037 2019-08-09 23:11:46 2019-08-10 08:54:37 2019-08-10 10:02:37 1:08:00 0:50:24 0:17:36 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
pass 4204038 2019-08-09 23:11:46 2019-08-10 08:57:29 2019-08-10 09:23:28 0:25:59 0:15:35 0:10:24 mira master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rgw_snaps.yaml} 2
fail 4204039 2019-08-09 23:11:47 2019-08-10 09:00:06 2019-08-10 09:26:05 0:25:59 0:16:45 0:09:14 mira master rhel 7.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204040 2019-08-09 23:11:48 2019-08-10 09:03:50 2019-08-10 10:01:50 0:58:00 0:46:34 0:11:26 mira master centos 7.6 rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
fail 4204041 2019-08-09 23:11:49 2019-08-10 09:08:38 2019-08-10 09:24:37 0:15:59 0:06:51 0:09:08 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_rand_read.yaml} 1
Failure Reason:

Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

fail 4204042 2019-08-09 23:11:49 2019-08-10 09:10:09 2019-08-10 09:34:08 0:23:59 0:16:47 0:07:12 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} 2
Failure Reason:

Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204043 2019-08-09 23:11:50 2019-08-10 09:23:35 2019-08-10 10:19:35 0:56:00 0:33:09 0:22:51 mira master centos 7.6 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} 2
pass 4204044 2019-08-09 23:11:51 2019-08-10 09:24:52 2019-08-10 10:18:57 0:54:05 0:33:20 0:20:45 mira master centos 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
pass 4204045 2019-08-09 23:11:52 2019-08-10 09:26:21 2019-08-10 09:42:20 0:15:59 0:06:14 0:09:45 mira master ubuntu 18.04 rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4204046 2019-08-09 23:11:53 2019-08-10 09:29:57 2019-08-10 11:55:58 2:26:01 2:07:11 0:18:50 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

[Failure reason: truncated Ansible disk-preparation log. The `sgdisk --zap-all` task looped over /dev/sdd, /dev/sde, /dev/sdf and /dev/sdg (931.51 GB Seagate ST31000528AS and Hitachi HUA722010CLA330 drives behind an Areca ARC-1680 PCIe-to-SAS/SATA RAID controller); each zap reported "Creating new GPT entries. GPT data structures destroyed!" with rc 0, and /dev/sda (the partitioned Seagate ST31000524AS system disk) was skipped because its conditional evaluated false. The raw dump is cut off mid-record.]
You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011b25000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211RK0E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdb'}, 'ansible_loop_var': u'item', u'end': u'2019-08-10 11:53:58.997831', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011b25000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211RK0E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdb'}, u'cmd': u'sgdisk --zap-all /dev/sdb || sgdisk --zap-all /dev/sdb', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdb || sgdisk --zap-all /dev/sdb', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-08-10 11:53:57.986128'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! 
You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.011355', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011335800'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211338E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdc'}, 'ansible_loop_var': u'item', u'end': u'2019-08-10 11:54:00.265140', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011335800'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211338E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdc'}, u'cmd': u'sgdisk --zap-all /dev/sdc || sgdisk --zap-all /dev/sdc', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdc || sgdisk --zap-all /dev/sdc', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-08-10 11:53:59.253785'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.027644', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011a05200'], u'uuids': [u'6ed12035-2c70-4b9e-a071-4b4430bcd40d']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211PZBE', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdh'}, 'ansible_loop_var': u'item', u'end': u'2019-08-10 11:54:01.575307', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2011a05200'], u'uuids': [u'6ed12035-2c70-4b9e-a071-4b4430bcd40d']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N211PZBE', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdh'}, u'cmd': u'sgdisk --zap-all /dev/sdh || sgdisk --zap-all /dev/sdh', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdh || sgdisk --zap-all /dev/sdh', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-08-10 11:54:00.547663'}, {'stderr_lines': [u'Problem opening /dev/dm-0 for reading! Error is 2.', u'The specified file does not exist!', u"Problem opening '' for writing! Program will now terminate.", u'Warning! MBR not overwritten! Error is 2!', u'Problem opening /dev/dm-0 for reading! Error is 2.', u'The specified file does not exist!', u"Problem opening '' for writing! Program will now terminate.", u'Warning! MBR not overwritten! 
Error is 2!'], u'changed': True, u'stdout': u'', u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'delta': u'0:00:00.008554', 'stdout_lines': [], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': None, u'links': {u'masters': [], u'labels': [], u'ids': [u'dm-name-mpatha', u'dm-uuid-mpath-2001b4d2000000000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'', u'support_discard': u'0', u'serial': u'5VP8VMXN', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': None, u'partitions': {}}, 'key': u'dm-0'}, 'ansible_loop_var': u'item', u'end': u'2019-08-10 11:54:01.846757', '_ansible_no_log': False, u'start': u'2019-08-10 11:54:01.838203', u'failed': True, u'cmd': u'sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0', 'item': {'value': {u'sectorsize': u'512', u'vendor': None, u'links': {u'masters': [], u'labels': [], u'ids': [u'dm-name-mpatha', u'dm-uuid-mpath-2001b4d2000000000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'', u'support_discard': u'0', u'serial': u'5VP8VMXN', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': None, u'partitions': {}}, 'key': u'dm-0'}, u'stderr': u"Problem opening /dev/dm-0 for reading! Error is 2.\nThe specified file does not exist!\nProblem opening '' for writing! Program will now terminate.\nWarning! MBR not overwritten! Error is 2!\nProblem opening /dev/dm-0 for reading! Error is 2.\nThe specified file does not exist!\nProblem opening '' for writing! 
Program will now terminate.\nWarning! MBR not overwritten! Error is 2!", u'rc': 2, u'msg': u'non-zero return code'}]}}Traceback (most recent call last): File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump return dump_all([data], stream, Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict return self.represent_mapping(u'tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict return self.represent_mapping(u'tag:yaml.org,2002:map', data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 219, in represent_list return self.represent_sequence(u'tag:yaml.org,2002:seq', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 102, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict return self.represent_mapping(u'tag:yaml.org,2002:map', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict return self.represent_mapping(u'tag:yaml.org,2002:map', data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined raise RepresenterError("cannot represent an object", data)RepresenterError: ('cannot represent an object', u'sdd')
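The terminal RepresenterError is reproducible outside teuthology: `yaml.safe_dump` registers representers only for exact built-in types, so a subclass of `str`/`unicode` (such as Ansible's wrapped text values, here the device key `u'sdd'`) falls through to `represent_undefined`, which raises. A minimal sketch of the failure mode — `UnsafeText` below is a stand-in for Ansible's wrapper class, not its real name:

```python
import yaml

# A str subclass, standing in for Ansible's wrapped text type.
# safe_dump looks up representers by exact type, so this subclass
# is not matched by the built-in str representer.
class UnsafeText(str):
    pass

try:
    yaml.safe_dump({"device": UnsafeText("sdd")})
except yaml.representer.RepresenterError as exc:
    # Same terminal error as in the traceback above.
    print(type(exc).__name__, exc)
```

The usual fix is to coerce such values to plain built-ins (e.g. `str(value)`, or a recursive conversion of the failure dict) before handing them to `yaml.safe_dump`.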

fail 4204047 2019-08-09 23:11:53 2019-08-10 09:34:10 2019-08-10 10:02:09 0:27:59 0:17:20 0:10:39 mira master rhel 7.6 rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 2
Failure Reason:

Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204048 2019-08-09 23:11:54 2019-08-10 09:42:34 2019-08-10 10:08:33 0:25:59 0:17:03 0:08:56 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Command failed on mira059 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204049 2019-08-09 23:11:55 2019-08-10 10:02:05 2019-08-10 10:36:04 0:33:59 0:20:37 0:13:22 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} 2
fail 4204050 2019-08-09 23:11:56 2019-08-10 10:02:10 2019-08-10 12:32:12 2:30:02 2:12:23 0:17:39 mira master rhel 7.6 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} 1
Failure Reason:

Command failed on mira065 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204051 2019-08-09 23:11:56 2019-08-10 10:02:38 2019-08-10 10:30:38 0:28:00 0:15:11 0:12:49 mira master centos 7.6 rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
pass 4204052 2019-08-09 23:11:57 2019-08-10 10:08:35 2019-08-10 10:40:34 0:31:59 0:21:19 0:10:40 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
fail 4204053 2019-08-09 23:11:58 2019-08-10 10:19:04 2019-08-10 10:47:03 0:27:59 0:17:58 0:10:01 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/scrub_test.yaml} 2
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204054 2019-08-09 23:11:59 2019-08-10 10:19:36 2019-08-10 10:35:35 0:15:59 0:07:14 0:08:45 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4204055 2019-08-09 23:12:00 2019-08-10 10:22:07 2019-08-10 10:54:06 0:31:59 0:18:32 0:13:27 mira master centos 7.6 rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_recovery.yaml} 2
pass 4204056 2019-08-09 23:12:00 2019-08-10 10:30:43 2019-08-10 10:52:42 0:21:59 0:10:43 0:11:16 mira master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} 2
fail 4204057 2019-08-09 23:12:01 2019-08-10 10:35:37 2019-08-10 13:05:38 2:30:01 2:11:23 0:18:38 mira master rhel 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204058 2019-08-09 23:12:02 2019-08-10 10:36:06 2019-08-10 11:00:05 0:23:59 0:10:32 0:13:27 mira master centos 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} 2
Failure Reason:

Command failed on mira035 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204059 2019-08-09 23:12:03 2019-08-10 10:40:45 2019-08-10 11:16:44 0:35:59 0:11:18 0:24:41 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Command failed on mira063 with status 1: "sudo yum -y install '' '' ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse"

fail 4204060 2019-08-09 23:12:03 2019-08-10 10:47:18 2019-08-10 11:19:17 0:31:59 0:10:24 0:21:35 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204061 2019-08-09 23:12:04 2019-08-10 10:52:56 2019-08-10 11:10:56 0:18:00 0:07:39 0:10:21 mira master ubuntu 18.04 rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on mira118 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4801356e9e60dc30647e75824273e60f8d2aba22 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 4204062 2019-08-09 23:12:05 2019-08-10 10:54:21 2019-08-10 11:20:20 0:25:59 0:12:27 0:13:32 mira master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4204063 2019-08-09 23:12:06 2019-08-10 11:00:19 2019-08-10 11:24:18 0:23:59 0:10:22 0:13:37 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} 2
Failure Reason:

Command failed on mira035 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204064 2019-08-09 23:12:07 2019-08-10 11:10:07 2019-08-10 11:26:06 0:15:59 0:06:46 0:09:13 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_rand_read.yaml} 1
Failure Reason:

Command failed on mira118 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

pass 4204065 2019-08-09 23:12:07 2019-08-10 11:10:57 2019-08-10 11:46:57 0:36:00 0:26:47 0:09:13 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
fail 4204066 2019-08-09 23:12:08 2019-08-10 11:16:59 2019-08-10 11:40:58 0:23:59 0:10:32 0:13:27 mira master centos 7.6 rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 2
Failure Reason:

Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204067 2019-08-09 23:12:09 2019-08-10 11:19:19 2019-08-10 11:41:18 0:21:59 0:10:11 0:11:48 mira master centos 7.6 rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira117 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204068 2019-08-09 23:12:10 2019-08-10 11:20:34 2019-08-10 11:50:33 0:29:59 0:10:17 0:19:42 mira master centos 7.6 rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira018 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204069 2019-08-09 23:12:10 2019-08-10 11:24:20 2019-08-10 13:54:21 2:30:01 2:19:48 0:10:13 mira master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/osd.yaml} 1
fail 4204070 2019-08-09 23:12:11 2019-08-10 11:26:21 2019-08-10 11:54:20 0:27:59 0:18:18 0:09:41 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_api_tests.yaml} 2
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204071 2019-08-09 23:12:12 2019-08-10 11:41:12 2019-08-10 12:09:12 0:28:00 0:14:33 0:13:27 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on mira117 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204072 2019-08-09 23:12:13 2019-08-10 11:41:19 2019-08-10 12:03:18 0:21:59 0:10:36 0:11:23 mira master centos 7.6 rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 2
Failure Reason:

Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204073 2019-08-09 23:12:14 2019-08-10 11:47:00 2019-08-10 12:10:59 0:23:59 0:17:11 0:06:48 mira master rhel 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} 2
Failure Reason:

Command failed on mira064 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204074 2019-08-09 23:12:14 2019-08-10 11:50:35 2019-08-10 12:16:34 0:25:59 0:17:35 0:08:24 mira master rhel 7.6 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on mira035 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204075 2019-08-09 23:12:15 2019-08-10 11:54:22 2019-08-10 12:22:21 0:27:59 0:14:02 0:13:57 mira master centos 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204076 2019-08-09 23:12:16 2019-08-10 11:56:14 2019-08-10 12:28:13 0:31:59 0:15:39 0:16:20 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
Failure Reason:

Command failed on mira088 with status 1: "sudo yum -y install '' '' ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse"

fail 4204077 2019-08-09 23:12:17 2019-08-10 12:03:36 2019-08-10 12:27:35 0:23:59 0:14:02 0:09:57 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_seq_read.yaml} 1
Failure Reason:

Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204078 2019-08-09 23:12:18 2019-08-10 12:09:13 2019-08-10 14:39:15 2:30:02 2:11:54 0:18:08 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

Command failed on mira117 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204079 2019-08-09 23:12:19 2019-08-10 12:11:14 2019-08-10 12:37:14 0:26:00 0:18:37 0:07:23 mira master rhel 7.6 rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 2
Failure Reason:

Command failed on mira109 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204080 2019-08-09 23:12:19 2019-08-10 12:16:48 2019-08-10 14:48:49 2:32:01 2:14:07 0:17:54 mira master rhel 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/prometheus.yaml} 2
Failure Reason:

Command failed on mira066 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204081 2019-08-09 23:12:20 2019-08-10 12:22:36 2019-08-10 13:00:36 0:38:00 0:27:27 0:10:33 mira master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
pass 4204082 2019-08-09 23:12:21 2019-08-10 12:27:37 2019-08-10 13:01:36 0:33:59 0:24:56 0:09:03 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 2
fail 4204083 2019-08-09 23:12:22 2019-08-10 12:28:14 2019-08-10 12:56:19 0:28:05 0:18:29 0:09:36 mira master rhel 7.6 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 2
Failure Reason:

Command failed on mira107 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204084 2019-08-09 23:12:22 2019-08-10 12:32:13 2019-08-10 13:06:12 0:33:59 0:10:45 0:23:14 mira master centos 7.6 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
Failure Reason:

Command failed on mira088 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204085 2019-08-09 23:12:23 2019-08-10 12:37:29 2019-08-10 13:19:29 0:42:00 0:32:16 0:09:44 mira master ubuntu 18.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
fail 4204086 2019-08-09 23:12:24 2019-08-10 12:56:34 2019-08-10 13:22:33 0:25:59 0:17:34 0:08:25 mira master rhel 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_cls_all.yaml} 2
Failure Reason:

Command failed on mira107 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204087 2019-08-09 23:12:25 2019-08-10 13:00:50 2019-08-10 13:22:49 0:21:59 0:10:31 0:11:28 mira master centos 7.6 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204088 2019-08-09 23:12:26 2019-08-10 13:01:37 2019-08-10 13:17:36 0:15:59 0:06:28 0:09:31 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} 1
Failure Reason:

Command failed on mira002 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

fail 4204089 2019-08-09 23:12:26 2019-08-10 13:05:54 2019-08-10 13:29:53 0:23:59 0:10:23 0:13:36 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204090 2019-08-09 23:12:27 2019-08-10 13:06:14 2019-08-10 13:36:13 0:29:59 0:10:41 0:19:18 mira master centos 7.6 rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira065 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204091 2019-08-09 23:12:28 2019-08-10 13:17:46 2019-08-10 13:35:45 0:17:59 0:06:09 0:11:50 mira master ubuntu 18.04 rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} 3
fail 4204092 2019-08-09 23:12:29 2019-08-10 13:19:31 2019-08-10 13:43:30 0:23:59 0:10:30 0:13:29 mira master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on mira064 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204093 2019-08-09 23:12:30 2019-08-10 13:22:48 2019-08-10 13:56:47 0:33:59 0:11:52 0:22:07 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
Failure Reason:

Command failed on mira035 with status 1: "sudo yum -y install '' '' ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse"

fail 4204094 2019-08-09 23:12:30 2019-08-10 13:22:51 2019-08-10 13:44:50 0:21:59 0:10:19 0:11:40 mira master centos 7.6 rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira107 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204095 2019-08-09 23:12:31 2019-08-10 13:30:08 2019-08-10 13:54:07 0:23:59 0:10:22 0:13:37 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
Failure Reason:

Command failed on mira110 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204096 2019-08-09 23:12:32 2019-08-10 13:36:00 2019-08-10 13:57:59 0:21:59 0:10:30 0:11:29 mira master centos 7.6 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira065 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204097 2019-08-09 23:12:33 2019-08-10 13:36:15 2019-08-10 14:00:14 0:23:59 0:12:16 0:11:43 mira master ubuntu 18.04 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
pass 4204098 2019-08-09 23:12:34 2019-08-10 13:43:32 2019-08-10 13:59:31 0:15:59 0:06:55 0:09:04 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 4204099 2019-08-09 23:12:34 2019-08-10 13:44:59 2019-08-10 14:12:58 0:27:59 0:14:35 0:13:24 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4204100 2019-08-09 23:12:35 2019-08-10 13:54:09 2019-08-10 14:22:08 0:27:59 0:18:28 0:09:31 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} 1
fail 4204101 2019-08-09 23:12:36 2019-08-10 13:54:23 2019-08-10 14:18:22 0:23:59 0:10:21 0:13:38 mira master centos 7.6 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} 2
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204102 2019-08-09 23:12:37 2019-08-10 13:56:49 2019-08-10 14:22:48 0:25:59 0:17:58 0:08:01 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204103 2019-08-09 23:12:38 2019-08-10 13:58:13 2019-08-10 14:20:13 0:22:00 0:10:21 0:11:39 mira master centos 7.6 rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira065 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204104 2019-08-09 23:12:38 2019-08-10 13:59:43 2019-08-10 14:25:42 0:25:59 0:18:03 0:07:56 mira master rhel 7.6 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
Failure Reason:

Command failed on mira064 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204105 2019-08-09 23:12:39 2019-08-10 14:00:15 2019-08-10 14:24:14 0:23:59 0:10:25 0:13:34 mira master centos 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_python.yaml} 2
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204106 2019-08-09 23:12:40 2019-08-10 14:09:49 2019-08-10 16:43:51 2:34:02 2:15:53 0:18:09 mira master rhel 7.6 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/one.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on mira115 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204107 2019-08-09 23:12:41 2019-08-10 14:13:06 2019-08-10 15:19:06 1:06:00 0:57:29 0:08:31 mira master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} 1
fail 4204108 2019-08-09 23:12:42 2019-08-10 14:18:23 2019-08-10 14:46:23 0:28:00 0:19:48 0:08:12 mira master rhel 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204109 2019-08-09 23:12:42 2019-08-10 14:20:15 2019-08-10 14:42:14 0:21:59 0:10:24 0:11:35 mira master centos 7.6 rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira065 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204110 2019-08-09 23:12:43 2019-08-10 14:22:25 2019-08-10 14:44:24 0:21:59 0:10:09 0:11:50 mira master centos 7.6 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_fio.yaml} 1
Failure Reason:

Command failed on mira118 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204111 2019-08-09 23:12:44 2019-08-10 14:22:50 2019-08-10 14:50:49 0:27:59 0:12:04 0:15:55 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
Failure Reason:

Command failed on mira110 with status 1: "sudo yum -y install '' '' ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse"

fail 4204112 2019-08-09 23:12:45 2019-08-10 14:24:16 2019-08-10 14:48:15 0:23:59 0:10:29 0:13:30 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} 2
Failure Reason:

Command failed on mira101 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204113 2019-08-09 23:12:46 2019-08-10 14:25:44 2019-08-10 14:59:44 0:34:00 0:23:41 0:10:19 mira master ubuntu 18.04 rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4204114 2019-08-09 23:12:47 2019-08-10 14:39:31 2019-08-10 15:03:30 0:23:59 0:18:03 0:05:56 mira master rhel 7.6 rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} 1
Failure Reason:

Command failed on mira117 with status 1: 'sudo yum -y install ceph-fuse'

dead 4204115 2019-08-09 23:12:47 2019-08-10 14:42:15 2019-08-11 02:44:37 12:02:22 mira master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
fail 4204116 2019-08-09 23:12:48 2019-08-10 14:44:26 2019-08-10 15:12:25 0:27:59 0:18:16 0:09:43 mira master rhel 7.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} 2
Failure Reason:

Command failed on mira088 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204117 2019-08-09 23:12:49 2019-08-10 14:46:38 2019-08-10 15:10:37 0:23:59 0:10:31 0:13:28 mira master centos 7.6 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} 2
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204118 2019-08-09 23:12:50 2019-08-10 14:48:17 2019-08-10 15:10:16 0:21:59 0:10:33 0:11:26 mira master centos 7.6 rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira018 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204119 2019-08-09 23:12:51 2019-08-10 14:48:51 2019-08-10 15:10:50 0:21:59 0:10:25 0:11:34 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} 2
Failure Reason:

Command failed on mira066 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204120 2019-08-09 23:12:51 2019-08-10 14:50:50 2019-08-10 15:14:50 0:24:00 0:10:19 0:13:41 mira master centos 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_stress_watch.yaml} 2
Failure Reason:

Command failed on mira110 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204121 2019-08-09 23:12:52 2019-08-10 14:59:59 2019-08-10 15:15:58 0:15:59 0:07:45 0:08:14 mira master ubuntu 18.04 rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4204122 2019-08-09 23:12:53 2019-08-10 15:03:46 2019-08-10 15:19:45 0:15:59 0:06:44 0:09:15 mira master ubuntu 18.04 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} 1
Failure Reason:

Command failed on mira117 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'

fail 4204123 2019-08-09 23:12:54 2019-08-10 15:10:18 2019-08-10 15:34:17 0:23:59 0:10:32 0:13:27 mira master centos 7.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on mira041 with status 1: 'sudo yum -y install ceph-fuse'

pass 4204124 2019-08-09 23:12:54 2019-08-10 15:10:52 2019-08-10 15:46:51 0:35:59 0:25:05 0:10:54 mira master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} 2
fail 4204125 2019-08-09 23:12:55 2019-08-10 15:10:52 2019-08-10 15:30:51 0:19:59 0:10:07 0:09:52 mira master centos 7.6 rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira064 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204126 2019-08-09 23:12:56 2019-08-10 15:12:27 2019-08-10 15:32:26 0:19:59 0:10:04 0:09:55 mira master centos 7.6 rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

Command failed on mira082 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204127 2019-08-09 23:12:57 2019-08-10 15:15:05 2019-08-10 15:41:04 0:25:59 0:10:38 0:15:21 mira master centos 7.6 rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_clock_with_skews.yaml} 3
Failure Reason:

Command failed on mira002 with status 1: 'sudo yum -y install ceph-fuse'

fail 4204128 2019-08-09 23:12:57 2019-08-10 15:16:00 2019-08-10 15:45:59 0:29:59 0:18:55 0:11:04 mira master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} 2
Failure Reason:

"2019-08-10T15:37:34.781975+0000 mon.a (mon.0) 1621 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

fail 4204129 2019-08-09 23:12:58 2019-08-10 15:19:21 2019-08-10 15:55:21 0:36:00 0:12:37 0:23:23 mira master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
Failure Reason:

Command failed on mira088 with status 1: "sudo yum -y install '' '' ceph-radosgw ceph-test ceph ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-cloud ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-ssh ceph-fuse libcephfs2 libcephfs-devel librados2 librbd1 python-ceph rbd-fuse"
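Most failures in this run share a handful of signatures (`sudo yum -y install ceph-fuse` on centos/rhel nodes, the `'' ''` package-list variant, and `cbt.py` invocations on the perf jobs), which points at an environment or packaging problem rather than per-test regressions. A small triage sketch (assumed input format, hypothetical helper names) that groups failure reasons after masking the per-job host name:

```python
import re
from collections import Counter

# Sample reasons copied from rows above; in practice these would be scraped
# from the run's job records.
failures = [
    "Command failed on mira063 with status 1: 'sudo yum -y install ceph-fuse'",
    "Command failed on mira117 with status 1: 'sudo yum -y install ceph-fuse'",
    "Command failed on mira002 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a ...'",
]

def normalize(reason: str) -> str:
    # Mask the specific test node (miraNNN) so identical failures group.
    return re.sub(r"mira\d+", "<host>", reason)

counts = Counter(normalize(r) for r in failures)
for reason, n in counts.most_common():
    print(f"{n}x {reason}")
```

With the full run as input, the dominant bucket here would be the `ceph-fuse` install failure, making it the first thing to investigate.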