Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 3844638 2019-04-13 22:13:20 2019-04-14 06:11:36 2019-04-14 07:39:36 1:28:00 1:02:23 0:25:37 smithi master rhel 7.5 rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/mon_clock_no_skews.yaml} 3
Failure Reason:

Command failed on smithi085 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844639 2019-04-13 22:13:21 2019-04-14 06:11:36 2019-04-14 06:35:35 0:23:59 0:08:24 0:15:35 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on smithi160 with status 1: '\n sudo yum -y install ceph-debuginfo\n '

fail 3844640 2019-04-13 22:13:21 2019-04-14 06:11:41 2019-04-14 09:57:44 3:46:03 3:22:38 0:23:25 smithi master ubuntu 18.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_cls_all.yaml} 2
Failure Reason:

Command failed (workunit test cls/test_cls_hello.sh) on smithi051 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc48a1110e81c08487e9ffe233e34afce17ba66c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'

pass 3844641 2019-04-13 22:13:22 2019-04-14 06:14:07 2019-04-14 07:02:11 0:48:04 0:24:45 0:23:19 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
pass 3844642 2019-04-13 22:13:23 2019-04-14 06:15:47 2019-04-14 07:27:47 1:12:00 0:32:16 0:39:44 smithi master centos 7.5 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} 3
dead 3844643 2019-04-13 22:13:24 2019-04-14 06:15:47 2019-04-14 14:54:00 8:38:13 smithi master rhel 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 3844644 2019-04-13 22:13:25 2019-04-14 06:17:42 2019-04-14 06:37:42 0:20:00 0:10:00 0:10:00 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_write.yaml} 1
Failure Reason:

HTTPConnectionPool(host='smithi205.front.sepia.ceph.com', port=7280): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f8067f57610>: Failed to establish a new connection: [Errno 111] Connection refused',))

pass 3844645 2019-04-13 22:13:26 2019-04-14 06:17:42 2019-04-14 07:21:42 1:04:00 0:20:11 0:43:49 smithi master centos 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 3844646 2019-04-13 22:13:27 2019-04-14 06:19:47 2019-04-14 06:57:47 0:38:00 0:15:05 0:22:55 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} 2
Failure Reason:

"2019-04-14 06:46:08.447502 mon.a (mon.0) 111 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

fail 3844647 2019-04-13 22:13:27 2019-04-14 06:19:47 2019-04-14 07:27:48 1:08:01 0:56:14 0:11:47 smithi master ubuntu 18.04 rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi139 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844648 2019-04-13 22:13:28 2019-04-14 06:21:38 2019-04-14 07:45:38 1:24:00 0:59:42 0:24:18 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
Failure Reason:

Command failed on smithi144 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844649 2019-04-13 22:13:29 2019-04-14 06:21:38 2019-04-14 08:13:39 1:52:01 1:04:37 0:47:24 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Command failed on smithi068 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'

pass 3844650 2019-04-13 22:13:30 2019-04-14 06:21:39 2019-04-14 06:51:39 0:30:00 0:15:38 0:14:22 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/fio_4K_rand_read.yaml} 1
dead 3844651 2019-04-13 22:13:31 2019-04-14 06:23:26 2019-04-14 14:53:34 8:30:08 8:13:29 0:16:39 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml}
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=10070)

dead 3844652 2019-04-13 22:13:31 2019-04-14 06:23:37 2019-04-14 14:53:44 8:30:07 smithi master ubuntu 18.04 rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
pass 3844653 2019-04-13 22:13:32 2019-04-14 06:23:40 2019-04-14 07:09:40 0:46:00 0:14:05 0:31:55 smithi master centos 7.5 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/sync-many.yaml workloads/rados_5925.yaml} 2
pass 3844654 2019-04-13 22:13:33 2019-04-14 06:23:41 2019-04-14 06:51:41 0:28:00 0:10:21 0:17:39 smithi master centos 7.5 rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3844655 2019-04-13 22:13:34 2019-04-14 06:25:45 2019-04-14 08:23:46 1:58:01 1:43:16 0:14:45 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} 2
Failure Reason:

reached maximum tries (800) after waiting for 4800 seconds

fail 3844656 2019-04-13 22:13:35 2019-04-14 06:27:42 2019-04-14 07:35:42 1:08:00 0:56:31 0:11:29 smithi master ubuntu 18.04 rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi194 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844657 2019-04-13 22:13:35 2019-04-14 06:29:35 2019-04-14 07:45:35 1:16:00 1:02:09 0:13:51 smithi master rhel 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} tasks/rados_stress_watch.yaml} 2
Failure Reason:

Command failed on smithi201 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844658 2019-04-13 22:13:36 2019-04-14 06:29:40 2019-04-14 07:43:40 1:14:00 0:56:20 0:17:40 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3844659 2019-04-13 22:13:37 2019-04-14 06:31:31 2019-04-14 06:57:30 0:25:59 0:08:04 0:17:55 smithi master ubuntu 18.04 rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} 3
fail 3844660 2019-04-13 22:13:38 2019-04-14 06:31:35 2019-04-14 07:57:36 1:26:01 1:04:18 0:21:43 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
Failure Reason:

Command failed on smithi037 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3844661 2019-04-13 22:13:39 2019-04-14 06:31:41 2019-04-14 07:11:41 0:40:00 0:17:15 0:22:45 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
pass 3844662 2019-04-13 22:13:39 2019-04-14 06:31:50 2019-04-14 07:13:49 0:41:59 0:21:57 0:20:02 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
pass 3844663 2019-04-13 22:13:40 2019-04-14 06:33:47 2019-04-14 07:03:46 0:29:59 0:11:33 0:18:26 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_striper.yaml} 2
fail 3844664 2019-04-13 22:13:41 2019-04-14 06:35:49 2019-04-14 07:49:49 1:14:00 1:02:35 0:11:25 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} 2
Failure Reason:

Command failed on smithi136 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844665 2019-04-13 22:13:42 2019-04-14 06:35:49 2019-04-14 08:03:50 1:28:01 1:00:21 0:27:40 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
Failure Reason:

Command failed on smithi046 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844666 2019-04-13 22:13:43 2019-04-14 06:37:11 2019-04-14 07:13:11 0:36:00 0:22:16 0:13:44 smithi master ubuntu 16.04 rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi161 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'

pass 3844667 2019-04-13 22:13:44 2019-04-14 06:37:33 2019-04-14 07:29:33 0:52:00 0:35:34 0:16:26 smithi master centos 7.5 rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
dead 3844668 2019-04-13 22:13:44 2019-04-14 06:37:37 2019-04-14 14:53:49 8:16:12 8:00:23 0:15:49 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

psutil.NoSuchProcess process no longer exists (pid=20923)

pass 3844669 2019-04-13 22:13:45 2019-04-14 06:37:43 2019-04-14 07:25:43 0:48:00 0:31:12 0:16:48 smithi master centos 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
fail 3844670 2019-04-13 22:13:46 2019-04-14 06:41:49 2019-04-14 07:47:49 1:06:00 0:55:51 0:10:09 smithi master ubuntu 18.04 rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi027 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844671 2019-04-13 22:13:47 2019-04-14 06:41:49 2019-04-14 07:05:48 0:23:59 0:15:55 0:08:04 smithi master rhel 7.5 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_latest.yaml} workloads/fio_4M_rand_write.yaml} 1
Failure Reason:

HTTPConnectionPool(host='smithi191.front.sepia.ceph.com', port=7280): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f9bc6c7b990>: Failed to establish a new connection: [Errno 111] Connection refused',))

pass 3844672 2019-04-13 22:13:48 2019-04-14 06:41:49 2019-04-14 07:27:48 0:45:59 0:36:40 0:09:19 smithi master centos 7.5 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3844673 2019-04-13 22:13:49 2019-04-14 06:43:45 2019-04-14 07:19:44 0:35:59 0:21:13 0:14:46 smithi master rhel 7.5 rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{rhel_latest.yaml} thrashosds-health.yaml} 4
Failure Reason:

Command failed on smithi095 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 3844674 2019-04-13 22:13:49 2019-04-14 06:43:45 2019-04-14 07:03:44 0:19:59 0:08:23 0:11:36 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on smithi049 with status 1: '\n sudo yum -y install ceph-debuginfo\n '

fail 3844675 2019-04-13 22:13:50 2019-04-14 06:43:45 2019-04-14 08:49:46 2:06:01 0:59:18 1:06:43 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 4
Failure Reason:

Command failed on smithi085 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3844676 2019-04-13 22:13:51 2019-04-14 06:43:45 2019-04-14 07:09:44 0:25:59 0:15:20 0:10:39 smithi master centos 7.5 rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3844677 2019-04-13 22:13:52 2019-04-14 06:43:47 2019-04-14 07:59:47 1:16:00 1:02:28 0:13:32 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} 2
Failure Reason:

Command failed on smithi109 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

pass 3844678 2019-04-13 22:13:53 2019-04-14 06:45:45 2019-04-14 07:21:50 0:36:05 0:27:31 0:08:34 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 3844679 2019-04-13 22:13:54 2019-04-14 06:45:45 2019-04-14 07:11:45 0:26:00 0:15:04 0:10:56 smithi master centos 7.5 rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_latest.yaml} workloads/radosbench_4K_rand_read.yaml} 1
pass 3844680 2019-04-13 22:13:55 2019-04-14 06:47:50 2019-04-14 07:33:49 0:45:59 0:32:09 0:13:50 smithi master centos 7.5 rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} 2
fail 3844681 2019-04-13 22:13:55 2019-04-14 06:51:48 2019-04-14 07:57:48 1:06:00 0:56:12 0:09:48 smithi master ubuntu 16.04 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_workunit_loadgen_mix.yaml} 2
Failure Reason:

Command failed on smithi162 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844682 2019-04-13 22:13:56 2019-04-14 06:51:48 2019-04-14 14:53:56 8:02:08 smithi master rhel 7.5 rados/standalone/{supported-random-distro$/{rhel_latest.yaml} workloads/mon.yaml} 1
fail 3844683 2019-04-13 22:13:57 2019-04-14 06:51:48 2019-04-14 07:59:48 1:08:00 0:55:52 0:12:08 smithi master ubuntu 18.04 rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi065 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844684 2019-04-13 22:13:58 2019-04-14 06:51:48 2019-04-14 07:57:48 1:06:00 0:56:21 0:09:39 smithi master ubuntu 16.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
Failure Reason:

Command failed on smithi078 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844685 2019-04-13 22:13:58 2019-04-14 06:53:15 2019-04-14 08:47:16 1:54:01 1:00:23 0:53:38 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} 4
Failure Reason:

Command failed on smithi002 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844686 2019-04-13 22:13:59 2019-04-14 06:53:34 2019-04-14 14:53:42 8:00:08 smithi master ubuntu 18.04 rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} 3
fail 3844687 2019-04-13 22:14:00 2019-04-14 06:55:32 2019-04-14 08:01:32 1:06:00 0:56:00 0:10:00 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} 2
Failure Reason:

Command failed on smithi044 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844688 2019-04-13 22:14:01 2019-04-14 06:57:44 2019-04-14 07:21:43 0:23:59 0:10:36 0:13:23 smithi master centos 7.5 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_latest.yaml} tasks/progress.yaml} 2
Failure Reason:

Command failed on smithi181 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

pass 3844689 2019-04-13 22:14:02 2019-04-14 06:57:48 2019-04-14 07:41:48 0:44:00 0:30:47 0:13:13 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} 2
fail 3844690 2019-04-13 22:14:03 2019-04-14 06:59:44 2019-04-14 08:09:44 1:10:00 0:59:52 0:10:08 smithi master centos 7.5 rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi081 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844691 2019-04-13 22:14:03 2019-04-14 06:59:44 2019-04-14 08:07:44 1:08:00 0:56:14 0:11:46 smithi master ubuntu 18.04 rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} 2
Failure Reason:

Command failed on smithi141 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844692 2019-04-13 22:14:04 2019-04-14 06:59:44 2019-04-14 08:11:44 1:12:00 1:00:27 0:11:33 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Command failed on smithi118 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844693 2019-04-13 22:14:05 2019-04-14 06:59:47 2019-04-14 08:15:48 1:16:01 1:02:05 0:13:56 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
Failure Reason:

Command failed on smithi145 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844694 2019-04-13 22:14:06 2019-04-14 07:01:50 2019-04-14 14:53:57 7:52:07 smithi master ubuntu 16.04 rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/osd.yaml} 1
dead 3844695 2019-04-13 22:14:07 2019-04-14 07:01:50 2019-04-14 14:53:57 7:52:07 smithi master rhel 7.5 rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
dead 3844696 2019-04-13 22:14:07 2019-04-14 07:02:13 2019-04-14 14:52:20 7:50:07 smithi master rhel 7.5 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 3844697 2019-04-13 22:14:08 2019-04-14 07:03:57 2019-04-14 08:10:02 1:06:05 0:55:53 0:10:12 smithi master ubuntu 16.04 rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed on smithi153 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844698 2019-04-13 22:14:09 2019-04-14 07:03:57 2019-04-14 08:09:57 1:06:00 0:56:10 0:09:50 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed on smithi191 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844699 2019-04-13 22:14:10 2019-04-14 07:03:57 2019-04-14 14:54:05 7:50:08 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 2
fail 3844700 2019-04-13 22:14:11 2019-04-14 07:19:53 2019-04-14 08:31:53 1:12:00 0:59:15 0:12:45 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} 4
Failure Reason:

Command failed on smithi067 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844701 2019-04-13 22:14:11 2019-04-14 07:21:48 2019-04-14 07:45:48 0:24:00 0:10:19 0:13:41 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_write.yaml} 1
Failure Reason:

HTTPConnectionPool(host='smithi001.front.sepia.ceph.com', port=7280): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fccf5a41a10>: Failed to establish a new connection: [Errno 111] Connection refused',))

fail 3844702 2019-04-13 22:14:12 2019-04-14 07:21:49 2019-04-14 08:49:49 1:28:00 0:59:36 0:28:24 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/repair_test.yaml} 2
Failure Reason:

Command failed on smithi184 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844703 2019-04-13 22:14:13 2019-04-14 07:21:49 2019-04-14 08:33:49 1:12:00 1:02:02 0:09:58 smithi master rhel 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} 2
Failure Reason:

Command failed on smithi095 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844704 2019-04-13 22:14:14 2019-04-14 07:21:49 2019-04-14 08:31:49 1:10:00 1:01:38 0:08:22 smithi master rhel 7.5 rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi137 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844705 2019-04-13 22:14:15 2019-04-14 07:21:51 2019-04-14 08:09:51 0:48:00 0:12:42 0:35:18 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

Command failed on smithi148 with status 1: '\n sudo yum -y install ceph-debuginfo\n '

fail 3844706 2019-04-13 22:14:15 2019-04-14 07:25:41 2019-04-14 08:55:42 1:30:01 0:59:20 0:30:41 smithi master ubuntu 16.04 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} 4
Failure Reason:

Command failed on smithi135 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844707 2019-04-13 22:14:16 2019-04-14 07:25:41 2019-04-14 10:49:44 3:24:03 3:11:36 0:12:27 smithi master centos 7.5 rados/standalone/{supported-random-distro$/{centos_latest.yaml} workloads/scrub.yaml} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi187 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc48a1110e81c08487e9ffe233e34afce17ba66c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

fail 3844708 2019-04-13 22:14:17 2019-04-14 07:25:44 2019-04-14 10:57:46 3:32:02 3:20:46 0:11:16 smithi master ubuntu 16.04 rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_rados_tool.sh) on smithi139 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc48a1110e81c08487e9ffe233e34afce17ba66c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh'

dead 3844709 2019-04-13 22:14:18 2019-04-14 07:27:59 2019-04-14 14:54:05 7:26:06 smithi master centos 7.5 rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} 2
dead 3844710 2019-04-13 22:14:19 2019-04-14 07:27:59 2019-04-14 14:54:05 7:26:06 smithi master ubuntu 18.04 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} 2
fail 3844711 2019-04-13 22:14:19 2019-04-14 07:27:59 2019-04-14 08:49:59 1:22:00 0:59:40 0:22:20 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} 2
Failure Reason:

Command failed on smithi060 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844712 2019-04-13 22:14:20 2019-04-14 07:29:46 2019-04-14 14:53:52 7:24:06 smithi master rhel 7.5 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} 2
dead 3844713 2019-04-13 22:14:21 2019-04-14 07:29:46 2019-04-14 14:53:52 7:24:06 smithi master ubuntu 18.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} 2
fail 3844714 2019-04-13 22:14:22 2019-04-14 07:34:01 2019-04-14 08:44:01 1:10:00 1:01:25 0:08:35 smithi master rhel 7.5 rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml}} 1
Failure Reason:

Command failed on smithi132 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

dead 3844715 2019-04-13 22:14:23 2019-04-14 07:35:55 2019-04-14 14:54:01 7:18:06 smithi master centos 7.5 rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
dead 3844716 2019-04-13 22:14:23 2019-04-14 07:37:49 2019-04-14 14:53:56 7:16:07 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/scrub_test.yaml} 2
fail 3844717 2019-04-13 22:14:24 2019-04-14 07:39:49 2019-04-14 08:53:50 1:14:01 0:59:46 0:14:15 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844718 2019-04-13 22:14:25 2019-04-14 07:39:49 2019-04-14 11:17:52 3:38:03 3:24:32 0:13:31 smithi master centos 7.5 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_alloc_hint.sh) on smithi177 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc48a1110e81c08487e9ffe233e34afce17ba66c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'

fail 3844719 2019-04-13 22:14:26 2019-04-14 07:39:52 2019-04-14 08:05:52 0:26:00 0:14:30 0:11:30 smithi master centos rados/singleton-flat/valgrind-leaks.yaml 1
Failure Reason:

Command failed on smithi022 with status 1: '\n sudo yum -y install ceph-debuginfo\n '

fail 3844720 2019-04-13 22:14:27 2019-04-14 07:40:06 2019-04-14 08:18:05 0:37:59 0:13:03 0:24:56 smithi master ubuntu 16.04 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashosds-health.yaml} 4
Failure Reason:

Command failed on smithi158 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=true"

fail 3844721 2019-04-13 22:14:27 2019-04-14 07:42:00 2019-04-14 08:54:01 1:12:01 0:59:02 0:12:59 smithi master centos 7.5 rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_latest.yaml}} 1
Failure Reason:

Command failed on smithi159 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844722 2019-04-13 22:14:28 2019-04-14 07:43:53 2019-04-14 08:49:53 1:06:00 0:55:58 0:10:02 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} 2
Failure Reason:

Command failed on smithi194 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844723 2019-04-13 22:14:29 2019-04-14 07:45:48 2019-04-14 08:59:48 1:14:00 1:02:10 0:11:50 smithi master rhel 7.5 rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_latest.yaml} thrashers/sync-many.yaml workloads/pool-create-delete.yaml} 2
Failure Reason:

Command failed on smithi088 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844724 2019-04-13 22:14:30 2019-04-14 07:45:48 2019-04-14 08:55:48 1:10:00 0:56:16 0:13:44 smithi master ubuntu 16.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
Failure Reason:

Command failed on smithi136 with status 1: 'sudo ceph --cluster ceph osd crush tunables jewel'

fail 3844725 2019-04-13 22:14:31 2019-04-14 07:45:48 2019-04-14 08:07:47 0:21:59 0:10:05 0:11:54 smithi master ubuntu 16.04 rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} 1
Failure Reason:

HTTPConnectionPool(host='smithi027.front.sepia.ceph.com', port=7280): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7efc4533b9d0>: Failed to establish a new connection: [Errno 111] Connection refused',))

fail 3844726 2019-04-13 22:14:32 2019-04-14 07:45:49 2019-04-14 09:19:50 1:34:01 0:59:47 0:34:14 smithi master centos 7.5 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} tasks/rados_cls_all.yaml} 2
Failure Reason:

Command failed on smithi154 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844727 2019-04-13 22:14:32 2019-04-14 07:47:54 2019-04-14 08:55:54 1:08:00 0:56:17 0:11:43 smithi master ubuntu 16.04 rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 2
Failure Reason:

Command failed on smithi026 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'

fail 3844728 2019-04-13 22:14:33 2019-04-14 07:47:54 2019-04-14 09:25:55 1:38:01 1:07:59 0:30:02 smithi master centos 7.6 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} 4
Failure Reason:

Command failed on smithi071 with status 1: 'sudo ceph --cluster ceph osd crush tunables hammer'

dead 3844729 2019-04-13 22:14:34 2019-04-14 07:47:54 2019-04-14 14:54:00 7:06:06 smithi master centos 7.5 rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{centos_latest.yaml}} 1
fail 3844730 2019-04-13 22:14:35 2019-04-14 07:50:02 2019-04-14 08:08:01 0:17:59 0:06:55 0:11:04 smithi master ubuntu 18.04 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} 2
Failure Reason:

Command failed on smithi175 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

fail 3844731 2019-04-13 22:14:35 2019-04-14 07:50:02 2019-04-14 11:16:04 3:26:02 3:13:22 0:12:40 smithi master ubuntu 18.04 rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/erasure-code.yaml} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi133 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc48a1110e81c08487e9ffe233e34afce17ba66c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

dead 3844732 2019-04-13 22:14:36 2019-04-14 07:51:51 2019-04-14 14:53:57 7:02:06 smithi master ubuntu 16.04 rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} 2
fail 3844733 2019-04-13 22:14:37 2019-04-14 07:51:51 2019-04-14 09:23:52 1:32:01 0:59:55 0:32:06 smithi master centos 7.5 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} 2
Failure Reason:

Command failed on smithi130 with status 1: 'sudo ceph --cluster ceph osd crush tunables default'