Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 4422714 2019-10-18 18:03:16 2019-10-19 13:44:55 2019-10-20 01:47:17 12:02:22 smithi master centos 7.6 rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
fail 4422715 2019-10-18 18:03:17 2019-10-19 13:44:55 2019-10-19 14:18:54 0:33:59 0:22:19 0:11:40 smithi master rhel 7.7 rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/prometheus.yaml} 2
Failure Reason:

"2019-10-19T14:07:29.336973+0000 mon.b (mon.0) 95 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

fail 4422716 2019-10-18 18:03:18 2019-10-19 13:48:11 2019-10-19 23:54:22 10:06:11 9:52:45 0:13:26 smithi master ubuntu 18.04 rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} 2
Failure Reason:

reached maximum tries (800) after waiting for 4800 seconds

dead 4422717 2019-10-18 18:03:19 2019-10-19 13:48:39 2019-10-20 01:51:10 12:02:31 smithi master rhel 7.7 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} 2
dead 4422718 2019-10-18 18:03:20 2019-10-19 13:50:11 2019-10-20 01:52:39 12:02:28 smithi master ubuntu 18.04 rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
fail 4422719 2019-10-18 18:03:21 2019-10-19 13:50:11 2019-10-19 14:16:11 0:26:00 0:18:35 0:07:25 smithi master rhel 7.7 rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4M_write.yaml} 1
pass 4422720 2019-10-18 18:03:22 2019-10-19 13:50:36 2019-10-19 15:06:36 1:16:00 1:03:51 0:12:09 smithi master rhel 7.7 rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} 2
pass 4422721 2019-10-18 18:03:23 2019-10-19 13:52:13 2019-10-19 14:16:12 0:23:59 0:15:26 0:08:33 smithi master rhel 7.7 rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} 2
pass 4422722 2019-10-18 18:03:24 2019-10-19 13:52:32 2019-10-19 14:16:31 0:23:59 0:13:33 0:10:26 smithi master rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} 2
dead 4422723 2019-10-18 18:03:25 2019-10-19 13:52:34 2019-10-20 01:55:00 12:02:26 smithi master centos 7.6 rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_striper.yaml} 2
fail 4422724 2019-10-18 18:03:26 2019-10-19 13:52:39 2019-10-19 17:24:42 3:32:03 3:22:04 0:09:59 smithi master ubuntu 18.04 rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi074 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 4422725 2019-10-18 18:03:27 2019-10-19 13:53:12 2019-10-19 17:29:15 3:36:03 3:28:17 0:07:46 smithi master rhel 7.7 rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_7.yaml}} 1
Failure Reason:

Command failed (workunit test rados/test_alloc_hint.sh) on smithi149 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'

fail 4422726 2019-10-18 18:03:28 2019-10-19 13:53:30 2019-10-19 14:29:30 0:36:00 0:22:36 0:13:24 smithi master ubuntu 18.04 rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1
Failure Reason:

"2019-10-19T14:11:25.962770+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

pass 4422727 2019-10-18 18:03:29 2019-10-19 13:53:45 2019-10-19 14:21:45 0:28:00 0:16:56 0:11:04 smithi master centos rados/singleton-flat/valgrind-leaks.yaml 1
fail 4422728 2019-10-18 18:03:29 2019-10-19 13:53:50 2019-10-19 14:29:50 0:36:00 0:25:41 0:10:19 smithi master centos 7.6 rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} 1
Failure Reason:

"2019-10-19T14:13:35.732354+0000 mon.a (mon.0) 131 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log

fail 4422729 2019-10-18 18:03:30 2019-10-19 13:53:53 2019-10-19 17:17:56 3:24:03 3:15:35 0:08:28 smithi master rhel 7.7 rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/crush.yaml} 1
Failure Reason:

Command failed (workunit test crush/crush-classes.sh) on smithi135 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-classes.sh'

dead 4422730 2019-10-18 18:03:31 2019-10-19 13:55:48 2019-10-20 01:58:17 12:02:29 smithi master rhel 7.7 rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} 4
fail 4422731 2019-10-18 18:03:32 2019-10-19 13:55:52 2019-10-19 14:31:51 0:35:59 0:20:51 0:15:08 smithi master rhel 7.7 rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} 4
Failure Reason:

Command failed on smithi088 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"

dead 4422732 2019-10-18 18:03:33 2019-10-19 13:56:21 2019-10-20 01:58:48 12:02:27 smithi master rhel 7.7 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} 2
pass 4422733 2019-10-18 18:03:34 2019-10-19 13:58:12 2019-10-19 14:16:12 0:18:00 0:09:31 0:08:29 smithi master ubuntu 18.04 rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} 1