User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sjust | 2019-10-17 19:24:08 | 2019-10-18 14:27:11 | 2019-10-19 03:37:43 | 13:10:32 | rados | wip-sjust-testing2 | smithi | f0009d4 | 36 | 70 | 116 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 4418121 | 2019-10-17 19:24:27 | 2019-10-17 19:27:45 | 2019-10-18 07:30:09 | 12:02:24 | | | smithi | master | centos | 7.6 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 |
fail | 4418122 | 2019-10-17 19:24:28 | 2019-10-17 19:27:45 | 2019-10-17 20:13:46 | 0:46:01 | 0:32:12 | 0:13:49 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason: "2019-10-17T19:46:55.958941+0000 mon.b (mon.0) 147 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 4418123 | 2019-10-17 19:24:29 | 2019-10-17 19:27:46 | 2019-10-18 07:30:09 | 12:02:23 | | | smithi | master | rhel | 7.7 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 |
dead | 4418124 | 2019-10-17 19:24:30 | 2019-10-17 19:27:53 | 2019-10-18 07:30:22 | 12:02:29 | | | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 |
dead | 4418125 | 2019-10-17 19:24:31 | 2019-10-17 19:30:49 | 2019-10-18 07:33:12 | 12:02:23 | | | smithi | master | rhel | 7.7 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 |
fail | 4418126 | 2019-10-17 19:24:32 | 2019-10-17 19:30:49 | 2019-10-17 20:00:49 | 0:30:00 | 0:14:19 | 0:15:41 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
pass | 4418127 | 2019-10-17 19:24:33 | 2019-10-17 19:30:56 | 2019-10-17 20:40:57 | 1:10:01 | 0:57:14 | 0:12:47 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 4418128 | 2019-10-17 19:24:34 | 2019-10-17 19:34:35 | 2019-10-17 19:54:34 | 0:19:59 | 0:07:56 | 0:12:03 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 4418129 | 2019-10-17 19:24:35 | 2019-10-17 19:34:35 | 2019-10-17 19:58:34 | 0:23:59 | 0:12:28 | 0:11:31 | smithi | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 |
dead | 4418130 | 2019-10-17 19:24:36 | 2019-10-17 19:35:00 | 2019-10-18 07:37:30 | 12:02:30 | | | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_striper.yaml} | 2 |
fail | 4418131 | 2019-10-17 19:24:37 | 2019-10-17 19:35:00 | 2019-10-17 23:15:04 | 3:40:04 | 3:24:30 | 0:15:34 | smithi | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi133 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 4418132 | 2019-10-17 19:24:38 | 2019-10-17 19:35:34 | 2019-10-17 23:07:37 | 3:32:03 | 3:21:32 | 0:10:31 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_alloc_hint.sh) on smithi094 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_alloc_hint.sh'
fail | 4418133 | 2019-10-17 19:24:39 | 2019-10-17 19:36:32 | 2019-10-17 20:38:32 | 1:02:00 | 0:44:14 | 0:17:46 | smithi | master | centos | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: "2019-10-17T20:02:04.902195+0000 mon.a (mon.0) 115 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 4418134 | 2019-10-17 19:24:40 | 2019-10-17 19:36:39 | 2019-10-17 20:10:39 | 0:34:00 | 0:16:54 | 0:17:06 | smithi | master | centos | | rados/singleton-flat/valgrind-leaks.yaml | 1 |
fail | 4418135 | 2019-10-17 19:24:41 | 2019-10-17 19:39:56 | 2019-10-17 20:21:50 | 0:41:54 | 0:25:40 | 0:16:14 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: "2019-10-17T20:04:04.330616+0000 mon.a (mon.0) 132 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4418136 | 2019-10-17 19:24:42 | 2019-10-17 19:39:59 | 2019-10-17 22:59:58 | 3:19:59 | 3:09:19 | 0:10:40 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/crush.yaml} | 1 | |
Failure Reason: Command failed (workunit test crush/crush-classes.sh) on smithi050 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-classes.sh'
dead | 4418137 | 2019-10-17 19:24:43 | 2019-10-17 19:40:14 | 2019-10-18 07:42:39 | 12:02:25 | | | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
fail | 4418138 | 2019-10-17 19:24:44 | 2019-10-17 19:40:43 | 2019-10-17 20:10:41 | 0:29:58 | 0:13:18 | 0:16:40 | smithi | master | ubuntu | 18.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: Command failed on smithi017 with status 124: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_health_to_clog=false"
dead | 4418139 | 2019-10-17 19:24:45 | 2019-10-17 19:41:20 | 2019-10-18 07:43:47 | 12:02:27 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 |
pass | 4418140 | 2019-10-17 19:24:46 | 2019-10-17 19:41:21 | 2019-10-17 20:07:21 | 0:26:00 | 0:12:16 | 0:13:44 | smithi | master | centos | 7.6 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4418141 | 2019-10-17 19:24:47 | 2019-10-17 19:42:08 | 2019-10-18 07:44:35 | 12:02:27 | | | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 |
dead | 4418142 | 2019-10-17 19:24:48 | 2019-10-17 19:43:11 | 2019-10-18 07:45:41 | 12:02:30 | | | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 |
fail | 4418143 | 2019-10-17 19:24:49 | 2019-10-17 19:43:11 | 2019-10-17 20:13:10 | 0:29:59 | 0:16:04 | 0:13:55 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason: "2019-10-17T20:01:18.446147+0000 mon.b (mon.0) 203 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4418144 | 2019-10-17 19:24:50 | 2019-10-17 19:45:07 | 2019-10-17 20:07:06 | 0:21:59 | 0:12:01 | 0:09:58 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
dead | 4418145 | 2019-10-17 19:24:51 | 2019-10-17 19:45:31 | 2019-10-18 07:47:57 | 12:02:26 | | | smithi | master | centos | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 |
dead | 4418146 | 2019-10-17 19:24:52 | 2019-10-17 19:45:44 | 2019-10-18 07:48:06 | 12:02:22 | | | smithi | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 |
dead | 4418147 | 2019-10-17 19:24:53 | 2019-10-17 19:46:33 | 2019-10-18 07:48:59 | 12:02:26 | | | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 |
dead | 4418148 | 2019-10-17 19:24:54 | 2019-10-17 19:46:33 | 2019-10-18 07:48:56 | 12:02:23 | | | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 |
fail | 4418149 | 2019-10-17 19:24:55 | 2019-10-17 19:46:36 | 2019-10-17 20:24:36 | 0:38:00 | 0:30:19 | 0:07:41 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-10-17T20:07:58.743369+0000 mon.a (mon.0) 150 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 4418150 | 2019-10-17 19:24:56 | 2019-10-17 19:47:05 | 2019-10-18 07:49:27 | 12:02:22 | | | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
pass | 4418151 | 2019-10-17 19:24:57 | 2019-10-17 19:48:02 | 2019-10-17 20:08:01 | 0:19:59 | 0:13:44 | 0:06:15 | smithi | master | rhel | 7.7 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4418152 | 2019-10-17 19:24:58 | 2019-10-17 19:48:25 | 2019-10-17 23:16:27 | 3:28:02 | 3:14:39 | 0:13:23 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-big.sh) on smithi114 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-big.sh'
dead | 4418153 | 2019-10-17 19:24:59 | 2019-10-17 19:50:00 | 2019-10-18 07:52:28 | 12:02:28 | | | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |
fail | 4418154 | 2019-10-17 19:24:59 | 2019-10-17 19:50:16 | 2019-10-17 20:14:15 | 0:23:59 | 0:11:44 | 0:12:15 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
dead | 4418155 | 2019-10-17 19:25:00 | 2019-10-17 19:52:09 | 2019-10-18 07:54:35 | 12:02:26 | | | smithi | master | rhel | 7.7 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 |
dead | 4418156 | 2019-10-17 19:25:01 | 2019-10-17 19:52:09 | 2019-10-18 07:54:32 | 12:02:23 | 11:45:08 | 0:17:15 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason: SSH connection to smithi064 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
dead | 4418157 | 2019-10-17 19:25:02 | 2019-10-17 19:52:43 | 2019-10-18 07:55:11 | 12:02:28 | | | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 |
dead | 4418158 | 2019-10-17 19:25:03 | 2019-10-17 19:53:25 | 2019-10-18 07:55:49 | 12:02:24 | 11:54:27 | 0:07:57 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=20639)
dead | 4418159 | 2019-10-17 19:25:04 | 2019-10-17 19:54:54 | 2019-10-18 07:59:23 | 12:04:29 | | | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 |
fail | 4418160 | 2019-10-17 19:25:05 | 2019-10-17 19:55:07 | 2019-10-17 20:25:06 | 0:29:59 | 0:19:00 | 0:10:59 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: "2019-10-17T20:14:22.242028+0000 mon.b (mon.0) 95 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
pass | 4418161 | 2019-10-17 19:25:07 | 2019-10-17 19:55:35 | 2019-10-17 20:17:34 | 0:21:59 | 0:14:33 | 0:07:26 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418162 | 2019-10-17 19:25:08 | 2019-10-17 19:55:40 | 2019-10-18 07:58:02 | 12:02:22 | | | smithi | master | centos | 7.6 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 |
pass | 4418163 | 2019-10-17 19:25:09 | 2019-10-17 19:58:39 | 2019-10-17 20:36:39 | 0:38:00 | 0:27:52 | 0:10:08 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/rados_mon_workunits.yaml} | 2 | |
dead | 4418164 | 2019-10-17 19:25:10 | 2019-10-17 19:58:40 | 2019-10-18 08:01:08 | 12:02:28 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 |
fail | 4418165 | 2019-10-17 19:25:11 | 2019-10-17 19:58:59 | 2019-10-17 23:27:02 | 3:28:03 | 3:20:52 | 0:07:11 | smithi | master | rhel | 7.7 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on smithi151 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/load-gen-mix.sh'
fail | 4418166 | 2019-10-17 19:25:12 | 2019-10-17 19:59:03 | 2019-10-17 20:23:02 | 0:23:59 | 0:17:29 | 0:06:30 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
pass | 4418167 | 2019-10-17 19:25:13 | 2019-10-17 20:00:48 | 2019-10-17 20:20:47 | 0:19:59 | 0:11:26 | 0:08:33 | smithi | master | centos | 7.6 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4418168 | 2019-10-17 19:25:14 | 2019-10-17 20:00:51 | 2019-10-17 20:26:51 | 0:26:00 | 0:08:12 | 0:17:48 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
fail | 4418169 | 2019-10-17 19:25:15 | 2019-10-17 20:01:41 | 2019-10-18 02:53:48 | 6:52:07 | 6:35:48 | 0:16:19 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi154 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 4418170 | 2019-10-17 19:25:16 | 2019-10-17 20:06:19 | 2019-10-17 23:26:22 | 3:20:03 | 3:09:28 | 0:10:35 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} | 1 | |
Failure Reason: Command failed (workunit test erasure-code/test-erasure-code-plugins.sh) on smithi138 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code-plugins.sh'
dead | 4418171 | 2019-10-17 19:25:17 | 2019-10-17 20:07:08 | 2019-10-18 08:09:31 | 12:02:23 | 10:54:13 | 1:08:10 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=13746)
dead | 4418172 | 2019-10-17 19:25:18 | 2019-10-17 20:07:14 | 2019-10-18 08:09:37 | 12:02:23 | | | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 |
pass | 4418173 | 2019-10-17 19:25:19 | 2019-10-17 20:07:39 | 2019-10-17 20:33:38 | 0:25:59 | 0:15:42 | 0:10:17 | smithi | master | centos | 7.6 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4418174 | 2019-10-17 19:25:20 | 2019-10-17 20:08:02 | 2019-10-17 20:28:01 | 0:19:59 | 0:13:07 | 0:06:52 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418175 | 2019-10-17 19:25:21 | 2019-10-17 20:08:25 | 2019-10-18 08:10:51 | 12:02:26 | | | smithi | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 |
dead | 4418176 | 2019-10-17 19:25:22 | 2019-10-17 20:08:33 | 2019-10-18 08:10:59 | 12:02:26 | 11:50:50 | 0:11:36 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | — | |
Failure Reason: SSH connection to smithi203 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
dead | 4418177 | 2019-10-17 19:25:23 | 2019-10-17 20:09:01 | 2019-10-18 08:11:29 | 12:02:28 | 11:50:53 | 0:11:35 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=10378)
dead | 4418178 | 2019-10-17 19:25:24 | 2019-10-17 20:09:02 | 2019-10-18 08:11:28 | 12:02:26 | | | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
dead | 4418179 | 2019-10-17 19:25:25 | 2019-10-17 20:10:21 | 2019-10-18 08:12:43 | 12:02:22 | 8:40:40 | 3:21:42 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=29448)
fail | 4418180 | 2019-10-17 19:25:26 | 2019-10-17 20:10:43 | 2019-10-17 20:38:41 | 0:27:58 | 0:16:12 | 0:11:46 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: "2019-10-17T20:27:30.145310+0000 mon.a (mon.0) 164 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4418181 | 2019-10-17 19:25:27 | 2019-10-17 20:10:43 | 2019-10-17 20:36:42 | 0:25:59 | 0:16:48 | 0:09:11 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 4418182 | 2019-10-17 19:25:28 | 2019-10-17 20:13:29 | 2019-10-17 20:39:29 | 0:26:00 | 0:17:55 | 0:08:05 | smithi | master | rhel | 7.7 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4418183 | 2019-10-17 19:25:29 | 2019-10-17 20:13:29 | 2019-10-17 23:43:32 | 3:30:03 | 3:14:40 | 0:15:23 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/load-gen-mostlyread.sh) on smithi072 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mostlyread.sh'
dead | 4418184 | 2019-10-17 19:25:30 | 2019-10-17 20:13:48 | 2019-10-18 08:16:15 | 12:02:27 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
pass | 4418185 | 2019-10-17 19:25:31 | 2019-10-17 20:14:17 | 2019-10-17 20:34:16 | 0:19:59 | 0:08:23 | 0:11:36 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
dead | 4418186 | 2019-10-17 19:25:32 | 2019-10-17 20:17:24 | 2019-10-18 08:19:46 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |||
pass | 4418187 | 2019-10-17 19:25:33 | 2019-10-17 20:17:29 | 2019-10-17 20:39:29 | 0:22:00 | 0:14:09 | 0:07:51 | smithi | master | rhel | 7.7 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418188 | 2019-10-17 19:25:34 | 2019-10-17 20:17:36 | 2019-10-18 08:19:58 | 12:02:22 | 11:49:54 | 0:12:28 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: SSH connection to smithi060 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op cache_try_flush 50 --op cache_flush 50 --op cache_evict 50 --op delete 50 --pool base'
fail | 4418189 | 2019-10-17 19:25:35 | 2019-10-17 20:20:21 | 2019-10-17 23:56:23 | 3:36:02 | 3:22:02 | 0:14:00 | smithi | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi012 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 4418190 | 2019-10-17 19:25:36 | 2019-10-17 20:20:48 | 2019-10-18 08:23:15 | 12:02:27 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |||
fail | 4418191 | 2019-10-17 19:25:37 | 2019-10-17 20:22:08 | 2019-10-17 20:46:07 | 0:23:59 | 0:14:05 | 0:09:54 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
pass | 4418192 | 2019-10-17 19:25:38 | 2019-10-17 20:23:04 | 2019-10-17 20:47:03 | 0:23:59 | 0:16:18 | 0:07:41 | smithi | master | rhel | 7.7 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418193 | 2019-10-17 19:25:39 | 2019-10-17 20:24:54 | 2019-10-18 08:27:17 | 12:02:23 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |||
dead | 4418194 | 2019-10-17 19:25:40 | 2019-10-17 20:25:08 | 2019-10-18 08:27:34 | 12:02:26 | smithi | master | rhel | 7.7 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/sync-many.yaml workloads/snaps-few-objects.yaml} | 2 | |||
fail | 4418195 | 2019-10-17 19:25:41 | 2019-10-17 20:25:21 | 2019-10-17 20:55:21 | 0:30:00 | 0:21:54 | 0:08:06 | smithi | master | rhel | 7.7 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: "2019-10-17T20:44:57.556811+0000 mon.b (mon.0) 165 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
dead | 4418196 | 2019-10-17 19:25:42 | 2019-10-17 20:27:08 | 2019-10-18 08:29:21 | 12:02:13 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | — | |||
dead | 4418197 | 2019-10-17 19:25:43 | 2019-10-17 20:28:03 | 2019-10-18 08:30:29 | 12:02:26 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/readwrite.yaml} | 2 | |||
pass | 4418198 | 2019-10-17 19:25:44 | 2019-10-17 20:28:03 | 2019-10-17 20:54:02 | 0:25:59 | 0:19:55 | 0:06:04 | smithi | master | rhel | 7.7 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418199 | 2019-10-17 19:25:45 | 2019-10-17 20:33:00 | 2019-10-18 08:35:26 | 12:02:26 | 11:51:33 | 0:10:53 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=14846)
fail | 4418200 | 2019-10-17 19:25:46 | 2019-10-17 20:33:39 | 2019-10-17 20:57:38 | 0:23:59 | 0:17:34 | 0:06:25 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
fail | 4418201 | 2019-10-17 19:25:47 | 2019-10-17 20:34:34 | 2019-10-17 21:20:33 | 0:45:59 | 0:39:45 | 0:06:14 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-10-17T20:54:33.350655+0000 mon.a (mon.0) 133 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 4418202 | 2019-10-17 19:25:48 | 2019-10-17 20:36:56 | 2019-10-17 20:56:55 | 0:19:59 | 0:09:10 | 0:10:49 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4418203 | 2019-10-17 19:25:49 | 2019-10-17 20:36:56 | 2019-10-17 21:54:57 | 1:18:01 | 0:23:34 | 0:54:27 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 3 | |
dead | 4418204 | 2019-10-17 19:25:50 | 2019-10-17 20:38:50 | 2019-10-18 08:43:18 | 12:04:28 | 11:51:02 | 0:13:26 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | — | |||
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on smithi198 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
dead | 4418205 | 2019-10-17 19:25:51 | 2019-10-17 20:38:50 | 2019-10-18 08:41:16 | 12:02:26 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
dead | 4418206 | 2019-10-17 19:25:52 | 2019-10-17 20:39:30 | 2019-10-18 08:41:57 | 12:02:27 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
pass | 4418207 | 2019-10-17 19:25:53 | 2019-10-17 20:39:30 | 2019-10-17 23:27:32 | 2:48:02 | 2:35:11 | 0:12:51 | smithi | master | centos | 7.6 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4418208 | 2019-10-17 19:25:54 | 2019-10-17 20:41:16 | 2019-10-17 21:07:15 | 0:25:59 | 0:14:12 | 0:11:47 | smithi | master | rhel | 7.7 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/misc.yaml} | 1 | |
Failure Reason: Command failed (workunit test misc/network-ping.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/network-ping.sh'
dead | 4418209 | 2019-10-17 19:25:55 | 2019-10-17 20:41:17 | 2019-10-18 08:43:45 | 12:02:28 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
dead | 4418210 | 2019-10-17 19:25:56 | 2019-10-17 20:46:25 | 2019-10-18 08:48:49 | 12:02:24 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
dead | 4418211 | 2019-10-17 19:25:57 | 2019-10-17 20:47:05 | 2019-10-18 08:49:32 | 12:02:27 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
dead | 4418212 | 2019-10-17 19:25:58 | 2019-10-17 20:50:34 | 2019-10-18 08:53:01 | 12:02:27 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |||
dead | 4418213 | 2019-10-17 19:25:59 | 2019-10-17 20:50:34 | 2019-10-18 08:53:01 | 12:02:27 | smithi | master | centos | 7.6 | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
fail | 4418214 | 2019-10-17 19:26:00 | 2019-10-17 20:53:23 | 2019-10-17 22:23:23 | 1:30:00 | 1:18:44 | 0:11:16 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
fail | 4418215 | 2019-10-17 19:26:01 | 2019-10-17 20:54:03 | 2019-10-17 22:24:04 | 1:30:01 | 0:49:18 | 0:40:43 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/failover.yaml} | 2 | |
Failure Reason: "2019-10-17T21:47:11.000618+0000 mon.b (mon.0) 94 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4418216 | 2019-10-17 19:26:02 | 2019-10-17 20:55:39 | 2019-10-17 21:19:39 | 0:24:00 | 0:11:48 | 0:12:12 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
dead | 4418217 | 2019-10-17 19:26:03 | 2019-10-17 20:57:13 | 2019-10-18 08:59:28 | 12:02:15 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | — | |||
dead | 4418218 | 2019-10-17 19:26:04 | 2019-10-17 20:57:40 | 2019-10-18 09:00:02 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/repair_test.yaml} | 2 | |||
pass | 4418219 | 2019-10-17 19:26:06 | 2019-10-17 20:59:32 | 2019-10-17 21:31:31 | 0:31:59 | 0:15:18 | 0:16:41 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418220 | 2019-10-17 19:26:07 | 2019-10-17 21:03:34 | 2019-10-18 09:05:56 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
dead | 4418221 | 2019-10-17 19:26:08 | 2019-10-17 21:07:54 | 2019-10-18 09:10:16 | 12:02:22 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |||
fail | 4418222 | 2019-10-17 19:26:09 | 2019-10-17 21:12:48 | 2019-10-17 21:48:44 | 0:35:56 | 0:29:39 | 0:06:17 | smithi | master | rhel | 7.7 | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Command failed on smithi173 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell osd.1 injectargs --objectstore-blackhole=true'
dead | 4418223 | 2019-10-17 19:26:10 | 2019-10-17 21:12:49 | 2019-10-18 09:15:08 | 12:02:19 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |||
fail | 4418224 | 2019-10-17 19:26:11 | 2019-10-17 21:20:00 | 2019-10-17 21:45:59 | 0:25:59 | 0:14:09 | 0:11:50 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
dead | 4418225 | 2019-10-17 19:26:12 | 2019-10-17 21:20:35 | 2019-10-18 09:23:10 | 12:02:35 | 11:36:20 | 0:26:15 | smithi | master | rhel | 7.7 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/sync.yaml workloads/pool-create-delete.yaml} | — | |
Failure Reason: psutil.NoSuchProcess process no longer exists (pid=29739)
fail | 4418226 | 2019-10-17 19:26:13 | 2019-10-17 21:20:40 | 2019-10-18 01:04:43 | 3:44:03 | 3:26:31 | 0:17:32 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_large_omap_detection.py) on smithi134 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_large_omap_detection.py'
dead | 4418227 | 2019-10-17 19:26:14 | 2019-10-17 21:24:17 | 2019-10-18 09:26:44 | 12:02:27 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |||
pass | 4418228 | 2019-10-17 19:26:15 | 2019-10-17 21:31:50 | 2019-10-17 21:59:49 | 0:27:59 | 0:15:41 | 0:12:18 | smithi | master | centos | 7.6 | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4418229 | 2019-10-17 19:26:16 | 2019-10-17 21:34:40 | 2019-10-17 22:22:40 | 0:48:00 | 0:21:34 | 0:26:26 | smithi | master | rhel | 7.7 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/insights.yaml} | 2 | |
Failure Reason: "2019-10-17T22:11:50.410353+0000 mon.b (mon.0) 158 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4418230 | 2019-10-17 19:26:17 | 2019-10-17 21:42:18 | 2019-10-17 22:48:18 | 1:06:00 | 0:29:00 | 0:37:00 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rgw_snaps.yaml} | 2 | |
pass | 4418231 | 2019-10-17 19:26:18 | 2019-10-17 21:46:17 | 2019-10-18 00:40:19 | 2:54:02 | 2:42:55 | 0:11:07 | smithi | master | centos | 7.6 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4418232 | 2019-10-17 19:26:19 | 2019-10-17 21:49:01 | 2019-10-18 09:23:13 | 11:34:12 | 1:21:12 | 10:13:00 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: reached maximum tries (500) after waiting for 3000 seconds
dead | 4418233 | 2019-10-17 19:26:20 | 2019-10-17 21:55:14 | 2019-10-18 09:57:41 | 12:02:27 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
dead | 4418234 | 2019-10-17 19:26:21 | 2019-10-17 22:00:06 | 2019-10-18 10:02:33 | 12:02:27 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |||
fail | 4418235 | 2019-10-17 19:26:23 | 2019-10-17 22:03:00 | 2019-10-17 22:33:00 | 0:30:00 | 0:16:31 | 0:13:29 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 4418236 | 2019-10-17 19:26:24 | 2019-10-17 22:09:46 | 2019-10-17 22:45:45 | 0:35:59 | 0:13:05 | 0:22:54 | smithi | master | centos | 7.6 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
dead | 4418237 | 2019-10-17 19:26:25 | 2019-10-17 22:23:02 | 2019-10-18 10:25:28 | 12:02:26 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |||
dead | 4418238 | 2019-10-17 19:26:26 | 2019-10-17 22:23:25 | 2019-10-18 10:25:48 | 12:02:23 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |||
dead | 4418239 | 2019-10-17 19:26:27 | 2019-10-17 22:23:49 | 2019-10-18 10:26:16 | 12:02:27 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |||
dead | 4418240 | 2019-10-17 19:26:28 | 2019-10-17 22:24:23 | 2019-10-18 10:26:49 | 12:02:26 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |||
pass | 4418241 | 2019-10-17 19:26:29 | 2019-10-17 22:33:18 | 2019-10-17 22:55:18 | 0:22:00 | 0:13:33 | 0:08:27 | smithi | master | rhel | 7.7 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
fail | 4418242 | 2019-10-17 19:26:30 | 2019-10-17 22:34:13 | 2019-10-17 23:26:13 | 0:52:00 | 0:31:10 | 0:20:50 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 4418243 | 2019-10-17 19:26:31 | 2019-10-17 22:37:26 | 2019-10-17 23:07:25 | 0:29:59 | 0:14:00 | 0:15:59 | smithi | master | rhel | 7.7 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4418244 | 2019-10-17 19:26:32 | 2019-10-17 22:37:52 | 2019-10-18 02:23:55 | 3:46:03 | 3:27:04 | 0:18:59 | smithi | master | rhel | 7.7 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi200 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
dead | 4418245 | 2019-10-17 19:26:33 | 2019-10-17 22:46:06 | 2019-10-18 10:48:29 | 12:02:23 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
fail | 4418246 | 2019-10-17 19:26:34 | 2019-10-17 22:48:36 | 2019-10-18 02:14:38 | 3:26:02 | 3:13:37 | 0:12:25 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} | 1 | |
Failure Reason:
Command failed (workunit test mon/mon-bind.sh) on smithi099 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-bind.sh' |
||||||||||||||
dead | 4418247 | 2019-10-17 19:26:35 | 2019-10-17 22:51:45 | 2019-10-18 10:54:11 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
fail | 4418248 | 2019-10-17 19:26:36 | 2019-10-17 22:55:36 | 2019-10-18 08:13:45 | 9:18:09 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 2 | |||
Failure Reason:
machine smithi170.front.sepia.ceph.com is locked by scheduled_nhm@teuthology, not scheduled_sjust@teuthology |
||||||||||||||
dead | 4418249 | 2019-10-17 19:26:37 | 2019-10-17 23:00:16 | 2019-10-18 11:02:38 | 12:02:22 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/scrub_test.yaml} | 2 | |||
dead | 4418250 | 2019-10-17 19:26:38 | 2019-10-17 23:02:37 | 2019-10-18 11:05:00 | 12:02:23 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
fail | 4418251 | 2019-10-17 19:26:39 | 2019-10-17 23:07:43 | 2019-10-17 23:39:42 | 0:31:59 | 0:19:14 | 0:12:45 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
"2019-10-17T23:28:15.079688+0000 mon.b (mon.0) 152 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
fail | 4418252 | 2019-10-17 19:26:40 | 2019-10-17 23:07:43 | 2019-10-17 23:31:43 | 0:24:00 | 0:13:52 | 0:10:08 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
fail | 4418253 | 2019-10-17 19:26:41 | 2019-10-17 23:08:22 | 2019-10-18 02:40:26 | 3:32:04 | 3:21:42 | 0:10:22 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_rados_tool.sh) on smithi104 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh' |
||||||||||||||
dead | 4418254 | 2019-10-17 19:26:42 | 2019-10-17 23:14:07 | 2019-10-18 11:18:35 | 12:04:28 | 2:25:12 | 9:39:16 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | — | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=22664) |
||||||||||||||
dead | 4418255 | 2019-10-17 19:26:42 | 2019-10-17 23:15:24 | 2019-10-18 11:17:47 | 12:02:23 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
dead | 4418256 | 2019-10-17 19:26:43 | 2019-10-17 23:16:45 | 2019-10-18 11:19:11 | 12:02:26 | smithi | master | centos | 7.6 | rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |||
pass | 4418257 | 2019-10-17 19:26:44 | 2019-10-17 23:19:31 | 2019-10-17 23:37:31 | 0:18:00 | 0:12:20 | 0:05:40 | smithi | master | rhel | 7.7 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4418258 | 2019-10-17 19:26:45 | 2019-10-17 23:22:08 | 2019-10-18 11:24:30 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/force-sync-many.yaml workloads/rados_5925.yaml} | 2 | |||
fail | 4418259 | 2019-10-17 19:26:46 | 2019-10-17 23:23:49 | 2019-10-17 23:59:49 | 0:36:00 | 0:25:29 | 0:10:31 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
"2019-10-17T23:42:17.396047+0000 mon.a (mon.0) 131 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
fail | 4418260 | 2019-10-17 19:26:47 | 2019-10-17 23:26:31 | 2019-10-17 23:50:31 | 0:24:00 | 0:16:58 | 0:07:02 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
dead | 4418261 | 2019-10-17 19:26:48 | 2019-10-17 23:26:31 | 2019-10-18 11:28:58 | 12:02:27 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
fail | 4418262 | 2019-10-17 19:26:49 | 2019-10-17 23:27:03 | 2019-10-18 00:21:03 | 0:54:00 | 0:44:05 | 0:09:55 | smithi | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
reached maximum tries (200) after waiting for 1200 seconds |
||||||||||||||
fail | 4418263 | 2019-10-17 19:26:50 | 2019-10-17 23:27:49 | 2019-10-18 03:11:52 | 3:44:03 | 3:22:28 | 0:21:35 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi094 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
||||||||||||||
dead | 4418264 | 2019-10-17 19:26:51 | 2019-10-17 23:32:03 | 2019-10-18 11:34:29 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |||
dead | 4418265 | 2019-10-17 19:26:52 | 2019-10-17 23:37:48 | 2019-10-18 11:40:15 | 12:02:27 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
fail | 4418266 | 2019-10-17 19:26:53 | 2019-10-17 23:40:01 | 2019-10-18 00:52:01 | 1:12:00 | 0:49:07 | 0:22:53 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason:
"2019-10-18T00:14:44.535123+0000 mon.a (mon.0) 103 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
dead | 4418267 | 2019-10-17 19:26:55 | 2019-10-17 23:43:49 | 2019-10-18 11:46:17 | 12:02:28 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
dead | 4418268 | 2019-10-17 19:26:55 | 2019-10-17 23:47:30 | 2019-10-18 11:49:53 | 12:02:23 | smithi | master | centos | 7.6 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
dead | 4418269 | 2019-10-17 19:26:57 | 2019-10-17 23:47:30 | 2019-10-18 11:49:58 | 12:02:28 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
dead | 4418270 | 2019-10-17 19:26:58 | 2019-10-17 23:48:13 | 2019-10-18 11:50:39 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
dead | 4418271 | 2019-10-17 19:26:59 | 2019-10-17 23:50:48 | 2019-10-18 11:53:15 | 12:02:27 | 3:57:33 | 8:04:54 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=24566) |
||||||||||||||
pass | 4418272 | 2019-10-17 19:27:00 | 2019-10-18 00:00:09 | 2019-10-18 00:32:08 | 0:31:59 | 0:23:47 | 0:08:12 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4418273 | 2019-10-17 19:27:01 | 2019-10-18 00:04:14 | 2019-10-18 00:32:13 | 0:27:59 | 0:16:41 | 0:11:18 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
dead | 4418274 | 2019-10-17 19:27:02 | 2019-10-18 00:11:45 | 2019-10-18 12:14:07 | 12:02:22 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
dead | 4418275 | 2019-10-17 19:27:03 | 2019-10-18 00:12:41 | 2019-10-18 12:15:03 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |||
dead | 4418276 | 2019-10-17 19:27:04 | 2019-10-18 00:15:41 | 2019-10-18 12:18:09 | 12:02:28 | 10:10:11 | 1:52:17 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=6779) |
||||||||||||||
pass | 4418277 | 2019-10-17 19:27:05 | 2019-10-18 00:21:21 | 2019-10-18 01:11:21 | 0:50:00 | 0:13:14 | 0:36:46 | smithi | master | rhel | 7.7 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
dead | 4418278 | 2019-10-17 19:27:06 | 2019-10-18 00:23:01 | 2019-10-18 12:25:27 | 12:02:26 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |||||
pass | 4418279 | 2019-10-17 19:27:07 | 2019-10-18 00:32:10 | 2019-10-18 01:12:10 | 0:40:00 | 0:15:36 | 0:24:24 | smithi | master | rhel | 7.7 | rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4418280 | 2019-10-17 19:27:08 | 2019-10-18 00:32:14 | 2019-10-18 00:58:14 | 0:26:00 | 0:07:50 | 0:18:10 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
dead | 4418281 | 2019-10-17 19:27:09 | 2019-10-18 00:32:34 | 2019-10-18 12:34:59 | 12:02:25 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |||
dead | 4418282 | 2019-10-17 19:27:10 | 2019-10-18 00:40:36 | 2019-10-18 12:42:48 | 12:02:12 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | — | |||
fail | 4418283 | 2019-10-17 19:27:11 | 2019-10-18 00:50:24 | 2019-10-18 05:30:28 | 4:40:04 | 3:22:34 | 1:17:30 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on smithi049 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh' |
||||||||||||||
fail | 4418284 | 2019-10-17 19:27:11 | 2019-10-18 00:52:19 | 2019-10-18 02:06:19 | 1:14:00 | 0:45:18 | 0:28:42 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
"2019-10-18T01:29:53.725721+0000 mon.a (mon.0) 105 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
fail | 4418285 | 2019-10-17 19:27:13 | 2019-10-18 00:58:31 | 2019-10-18 01:26:32 | 0:28:01 | 0:13:52 | 0:14:09 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
dead | 4418286 | 2019-10-17 19:27:14 | 2019-10-18 01:05:01 | 2019-10-18 13:07:28 | 12:02:27 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
dead | 4418287 | 2019-10-17 19:27:15 | 2019-10-18 01:11:38 | 2019-10-18 13:16:05 | 12:04:27 | 10:12:18 | 1:52:09 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | — | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=6553) |
||||||||||||||
fail | 4418288 | 2019-10-17 19:27:16 | 2019-10-18 01:12:11 | 2019-10-18 04:46:14 | 3:34:03 | 3:24:04 | 0:09:59 | smithi | master | centos | 7.6 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi202 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
||||||||||||||
fail | 4418289 | 2019-10-17 19:27:17 | 2019-10-18 01:26:49 | 2019-10-18 10:50:58 | 9:24:09 | 3:25:47 | 5:58:22 | smithi | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi001 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
||||||||||||||
dead | 4418290 | 2019-10-17 19:27:18 | 2019-10-18 01:55:36 | 2019-10-18 08:13:42 | 6:18:06 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | — | |||
Failure Reason:
'ssh_keyscan smithi196.front.sepia.ceph.com' reached maximum tries (5) after waiting for 5 seconds |
||||||||||||||
dead | 4418291 | 2019-10-17 19:27:19 | 2019-10-18 01:58:12 | 2019-10-18 14:00:38 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
dead | 4418292 | 2019-10-17 19:27:20 | 2019-10-18 01:59:49 | 2019-10-18 14:02:11 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |||
dead | 4418293 | 2019-10-17 19:27:21 | 2019-10-18 02:06:37 | 2019-10-18 14:08:59 | 12:02:22 | smithi | master | rhel | 7.7 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |||
dead | 4418294 | 2019-10-17 19:27:22 | 2019-10-18 02:14:56 | 2019-10-18 14:17:22 | 12:02:26 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
fail | 4418295 | 2019-10-17 19:27:23 | 2019-10-18 02:24:12 | 2019-10-18 02:48:11 | 0:23:59 | 0:13:51 | 0:10:08 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
dead | 4418296 | 2019-10-17 19:27:24 | 2019-10-18 14:25:17 | 2019-10-19 02:27:48 | 12:02:31 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
fail | 4418297 | 2019-10-17 19:27:25 | 2019-10-18 14:27:02 | 2019-10-18 18:05:05 | 3:38:03 | 3:25:36 | 0:12:27 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi101 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh' |
||||||||||||||
fail | 4418298 | 2019-10-17 19:27:26 | 2019-10-18 14:27:11 | 2019-10-18 18:03:14 | 3:36:03 | 3:27:38 | 0:08:25 | smithi | master | rhel | 7.7 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi089 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
||||||||||||||
dead | 4418299 | 2019-10-17 19:27:27 | 2019-10-18 14:28:45 | 2019-10-19 02:31:07 | 12:02:22 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/osd_stale_reads.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
dead | 4418300 | 2019-10-17 19:27:28 | 2019-10-18 14:28:57 | 2019-10-19 02:31:22 | 12:02:25 | smithi | master | rhel | 7.7 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |||
dead | 4418301 | 2019-10-17 19:27:29 | 2019-10-18 14:30:37 | 2019-10-19 02:33:00 | 12:02:23 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |||
dead | 4418302 | 2019-10-17 19:27:30 | 2019-10-18 14:31:52 | 2019-10-19 02:34:19 | 12:02:27 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |||
fail | 4418303 | 2019-10-17 19:27:31 | 2019-10-18 14:32:12 | 2019-10-18 15:00:12 | 0:28:00 | 0:16:09 | 0:11:51 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason:
"2019-10-18T14:49:24.921768+0000 mon.a (mon.0) 106 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
||||||||||||||
dead | 4418304 | 2019-10-17 19:27:32 | 2019-10-18 14:33:36 | 2019-10-19 02:36:02 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
pass | 4418305 | 2019-10-17 19:27:33 | 2019-10-18 14:35:04 | 2019-10-18 15:13:03 | 0:37:59 | 0:29:05 | 0:08:54 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
dead | 4418306 | 2019-10-17 19:27:34 | 2019-10-18 14:35:04 | 2019-10-19 02:37:30 | 12:02:26 | smithi | master | rhel | 7.7 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |||
fail | 4418307 | 2019-10-17 19:27:35 | 2019-10-18 14:36:58 | 2019-10-18 15:00:57 | 0:23:59 | 0:14:05 | 0:09:54 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
dead | 4418308 | 2019-10-17 19:27:36 | 2019-10-18 14:40:00 | 2019-10-19 02:42:27 | 12:02:27 | 10:46:59 | 1:15:28 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | — | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=20930) |
dead | 4418309 | 2019-10-17 19:27:37 | 2019-10-18 14:40:45 | 2019-10-19 02:43:11 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |||
dead | 4418310 | 2019-10-17 19:27:38 | 2019-10-18 14:42:08 | 2019-10-19 02:44:36 | 12:02:28 | smithi | master | centos | 7.6 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |||
pass | 4418311 | 2019-10-17 19:27:39 | 2019-10-18 14:43:36 | 2019-10-18 15:09:35 | 0:25:59 | 0:11:09 | 0:14:50 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4418312 | 2019-10-17 19:27:40 | 2019-10-18 14:43:36 | 2019-10-19 02:46:02 | 12:02:26 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
fail | 4418313 | 2019-10-17 19:27:41 | 2019-10-18 14:47:15 | 2019-10-18 18:29:18 | 3:42:03 | 3:25:16 | 0:16:47 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_stress_watch.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/stress_watch.sh) on smithi079 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/stress_watch.sh' |
pass | 4418314 | 2019-10-17 19:27:42 | 2019-10-18 14:50:20 | 2019-10-18 15:44:20 | 0:54:00 | 0:21:38 | 0:32:22 | smithi | master | rhel | 7.7 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_recovery.yaml} | 3 | |
fail | 4418315 | 2019-10-17 19:27:43 | 2019-10-18 14:50:36 | 2019-10-18 18:42:39 | 3:52:03 | 3:39:55 | 0:12:08 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on smithi005 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh' |
dead | 4418316 | 2019-10-17 19:27:44 | 2019-10-18 14:51:22 | 2019-10-19 02:53:45 | 12:02:23 | 11:51:09 | 0:11:14 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=3153) |
dead | 4418317 | 2019-10-17 19:27:45 | 2019-10-18 14:51:22 | 2019-10-19 02:53:48 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
dead | 4418318 | 2019-10-17 19:27:46 | 2019-10-18 14:53:33 | 2019-10-19 02:55:59 | 12:02:26 | smithi | master | rhel | 7.7 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |||
fail | 4418319 | 2019-10-17 19:27:47 | 2019-10-18 14:53:33 | 2019-10-18 15:33:33 | 0:40:00 | 0:15:43 | 0:24:17 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason:
"2019-10-18T15:21:53.594472+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
fail | 4418320 | 2019-10-17 19:27:48 | 2019-10-18 14:55:42 | 2019-10-18 15:19:41 | 0:23:59 | 0:11:39 | 0:12:20 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
dead | 4418321 | 2019-10-17 19:27:49 | 2019-10-18 15:00:30 | 2019-10-19 03:02:56 | 12:02:26 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |||
fail | 4418322 | 2019-10-17 19:27:50 | 2019-10-18 15:00:58 | 2019-10-18 18:45:01 | 3:44:03 | 3:22:44 | 0:21:19 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi058 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 4418323 | 2019-10-17 19:27:51 | 2019-10-18 15:03:27 | 2019-10-18 16:17:28 | 1:14:01 | 1:03:04 | 0:10:57 | smithi | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
dead | 4418324 | 2019-10-17 19:27:51 | 2019-10-18 15:03:28 | 2019-10-19 03:05:53 | 12:02:25 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
pass | 4418325 | 2019-10-17 19:27:52 | 2019-10-18 15:03:28 | 2019-10-18 15:23:27 | 0:19:59 | 0:10:22 | 0:09:37 | smithi | master | centos | 7.6 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4418326 | 2019-10-17 19:27:53 | 2019-10-18 15:03:50 | 2019-10-19 03:06:12 | 12:02:22 | smithi | master | rhel | 7.7 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
dead | 4418327 | 2019-10-17 19:27:54 | 2019-10-18 15:05:31 | 2019-10-19 03:10:00 | 12:04:29 | 4:50:06 | 7:14:23 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | — | |
Failure Reason:
reached maximum tries (500) after waiting for 3000 seconds |
dead | 4418328 | 2019-10-17 19:27:55 | 2019-10-18 15:06:53 | 2019-10-19 03:09:16 | 12:02:23 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |||
dead | 4418329 | 2019-10-17 19:27:56 | 2019-10-18 15:06:59 | 2019-10-19 03:09:23 | 12:02:24 | smithi | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
dead | 4418330 | 2019-10-17 19:27:57 | 2019-10-18 15:09:44 | 2019-10-18 16:09:44 | 1:00:00 | 0:37:41 | 0:22:19 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
SSH connection to smithi094 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_0' |
dead | 4418331 | 2019-10-17 19:27:58 | 2019-10-18 15:10:30 | 2019-10-19 03:14:57 | 12:04:27 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
fail | 4418332 | 2019-10-17 19:27:59 | 2019-10-18 15:12:49 | 2019-10-18 18:46:51 | 3:34:02 | 3:27:16 | 0:06:46 | smithi | master | rhel | 7.7 | rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_dedup_tool.sh) on smithi096 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0009d4765d23f6e6a0d55d83c1e9f63c45550d5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh' |
fail | 4418333 | 2019-10-17 19:28:01 | 2019-10-18 15:12:49 | 2019-10-18 15:40:48 | 0:27:59 | 0:11:40 | 0:16:19 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
dead | 4418334 | 2019-10-17 19:28:02 | 2019-10-18 15:13:05 | 2019-10-19 03:15:31 | 12:02:26 | 11:32:19 | 0:30:07 | smithi | master | rhel | 7.7 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_striper.yaml} | — | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=4174) |
dead | 4418335 | 2019-10-17 19:28:03 | 2019-10-18 15:17:32 | 2019-10-19 03:19:54 | 12:02:22 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |||
fail | 4418336 | 2019-10-17 19:28:04 | 2019-10-18 15:19:59 | 2019-10-18 16:09:58 | 0:49:59 | 0:34:40 | 0:15:19 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason:
"2019-10-18T15:44:17.943686+0000 mon.a (mon.0) 103 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
dead | 4418337 | 2019-10-17 19:28:05 | 2019-10-18 15:21:34 | 2019-10-19 03:23:56 | 12:02:22 | smithi | master | rhel | 7.7 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |||
dead | 4418338 | 2019-10-17 19:28:06 | 2019-10-18 15:23:45 | 2019-10-19 03:26:07 | 12:02:22 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |||
fail | 4418339 | 2019-10-17 19:28:07 | 2019-10-18 15:25:59 | 2019-10-18 15:59:58 | 0:33:59 | 0:27:26 | 0:06:33 | smithi | master | rhel | 7.7 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
"2019-10-18T15:43:31.752811+0000 mon.a (mon.0) 130 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log |
fail | 4418340 | 2019-10-17 19:28:08 | 2019-10-18 15:33:50 | 2019-10-18 15:59:49 | 0:25:59 | 0:17:00 | 0:08:59 | smithi | master | rhel | 7.7 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
dead | 4418341 | 2019-10-17 19:28:09 | 2019-10-18 15:35:21 | 2019-10-19 03:37:43 | 12:02:22 | smithi | master | centos | 7.6 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
dead | 4418342 | 2019-10-17 19:28:10 | 2019-10-18 15:35:22 | 2019-10-19 03:37:43 | 12:02:21 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 |