User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2019-10-06 17:25:20 | 2019-10-06 17:28:22 | 2019-10-07 10:40:57 | 17:12:35 | rados | wip-sage-testing-2019-10-06-0906 | smithi | f98fc15 | 119 | 93 | 10 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4364448 | 2019-10-06 17:25:37 | 2019-10-06 17:26:40 | 2019-10-06 20:02:42 | 2:36:02 | 2:11:13 | 0:24:49 | smithi | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
pass | 4364449 | 2019-10-06 17:25:38 | 2019-10-06 17:27:10 | 2019-10-06 22:13:14 | 4:46:04 | 0:29:00 | 4:17:04 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
pass | 4364450 | 2019-10-06 17:25:39 | 2019-10-06 17:28:22 | 2019-10-06 17:48:21 | 0:19:59 | 0:11:48 | 0:08:11 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364451 | 2019-10-06 17:25:40 | 2019-10-06 17:28:22 | 2019-10-06 19:12:23 | 1:44:01 | 0:49:14 | 0:54:47 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:35:10.338713+0000 mon.a (mon.0) 15 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364452 | 2019-10-06 17:25:41 | 2019-10-06 17:29:36 | 2019-10-06 17:53:35 | 0:23:59 | 0:15:34 | 0:08:25 | smithi | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/prometheus.yaml} | 2 | |
pass | 4364453 | 2019-10-06 17:25:42 | 2019-10-06 17:30:00 | 2019-10-06 17:56:00 | 0:26:00 | 0:19:46 | 0:06:14 | smithi | master | rhel | 7.6 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364454 | 2019-10-06 17:25:43 | 2019-10-06 17:32:23 | 2019-10-06 19:44:24 | 2:12:01 | 0:59:13 | 1:12:48 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:49:29.149537+0000 mon.b (mon.1) 13 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364455 | 2019-10-06 17:25:44 | 2019-10-06 17:34:26 | 2019-10-06 17:52:25 | 0:17:59 | 0:12:15 | 0:05:44 | smithi | master | rhel | 7.6 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364456 | 2019-10-06 17:25:45 | 2019-10-06 17:38:57 | 2019-10-06 18:26:56 | 0:47:59 | 0:28:36 | 0:19:23 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
fail | 4364457 | 2019-10-06 17:25:46 | 2019-10-06 17:42:43 | 2019-10-06 19:28:43 | 1:46:00 | 0:32:49 | 1:13:11 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
pass | 4364458 | 2019-10-06 17:25:47 | 2019-10-06 17:44:22 | 2019-10-06 18:02:21 | 0:17:59 | 0:08:38 | 0:09:21 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364459 | 2019-10-06 17:25:48 | 2019-10-06 17:44:22 | 2019-10-06 18:06:22 | 0:22:00 | 0:10:16 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
pass | 4364460 | 2019-10-06 17:25:49 | 2019-10-06 17:47:58 | 2019-10-06 18:07:58 | 0:20:00 | 0:11:22 | 0:08:38 | smithi | master | centos | 7.6 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364461 | 2019-10-06 17:25:50 | 2019-10-06 17:48:23 | 2019-10-06 18:28:22 | 0:39:59 | 0:25:00 | 0:14:59 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
Command failed on smithi190 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_1 unique_pool_1 --yes-i-really-really-mean-it'
pass | 4364462 | 2019-10-06 17:25:51 | 2019-10-06 17:48:23 | 2019-10-06 20:38:25 | 2:50:02 | 2:39:15 | 0:10:47 | smithi | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/force-sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
fail | 4364463 | 2019-10-06 17:25:52 | 2019-10-06 17:50:15 | 2019-10-06 19:58:16 | 2:08:01 | 0:15:55 | 1:52:06 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason:
Command failed on smithi095 with status 6: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_osd_down_out_interval=0"
fail | 4364464 | 2019-10-06 17:25:53 | 2019-10-06 17:51:31 | 2019-10-06 18:41:31 | 0:50:00 | 0:37:40 | 0:12:20 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:14:51.236191+0000 mon.a (mon.1) 14 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364465 | 2019-10-06 17:25:53 | 2019-10-06 17:52:27 | 2019-10-06 18:16:26 | 0:23:59 | 0:13:04 | 0:10:55 | smithi | master | centos | 7.6 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364466 | 2019-10-06 17:25:54 | 2019-10-06 17:53:51 | 2019-10-06 18:53:51 | 1:00:00 | 0:47:26 | 0:12:34 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:15:13.451345+0000 mon.b (mon.1) 9 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
dead | 4364467 | 2019-10-06 17:25:55 | 2019-10-06 17:54:14 | 2019-10-07 05:58:39 | 12:04:25 | | | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 |
fail | 4364468 | 2019-10-06 17:25:56 | 2019-10-06 17:56:15 | 2019-10-06 18:40:14 | 0:43:59 | 0:38:15 | 0:05:44 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:14:09.283179+0000 mon.b (mon.1) 35 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364469 | 2019-10-06 17:25:57 | 2019-10-06 17:56:15 | 2019-10-06 21:08:17 | 3:12:02 | 0:56:21 | 2:15:41 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
Command failed on smithi038 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config rm mgr mgr_debug_aggressive_pg_num_changes'
pass | 4364470 | 2019-10-06 17:25:58 | 2019-10-06 17:57:32 | 2019-10-06 18:25:31 | 0:27:59 | 0:11:13 | 0:16:46 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
pass | 4364471 | 2019-10-06 17:25:59 | 2019-10-06 17:58:01 | 2019-10-06 19:50:02 | 1:52:01 | 1:34:04 | 0:17:57 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
dead | 4364472 | 2019-10-06 17:26:00 | 2019-10-06 17:58:14 | 2019-10-07 06:00:36 | 12:02:22 | | | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 |
pass | 4364473 | 2019-10-06 17:26:01 | 2019-10-06 17:59:51 | 2019-10-06 18:27:50 | 0:27:59 | 0:19:16 | 0:08:43 | smithi | master | rhel | 7.6 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364474 | 2019-10-06 17:26:02 | 2019-10-06 18:01:59 | 2019-10-06 18:45:58 | 0:43:59 | 0:25:18 | 0:18:41 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_stress_watch.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:24:27.438567+0000 mon.a (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364475 | 2019-10-06 17:26:03 | 2019-10-06 18:02:23 | 2019-10-06 18:48:23 | 0:46:00 | 0:16:23 | 0:29:37 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
fail | 4364476 | 2019-10-06 17:26:04 | 2019-10-06 18:04:04 | 2019-10-06 18:56:04 | 0:52:00 | 0:41:24 | 0:10:36 | smithi | master | centos | 7.6 | rados/singleton/{all/mon-config.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
"2019-10-06T18:21:56.375421+0000 mon.b (mon.1) 17 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364477 | 2019-10-06 17:26:04 | 2019-10-06 18:06:37 | 2019-10-06 18:16:36 | 0:09:59 | | | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 |
Failure Reason:
Command failed on smithi046 with status 1: 'sudo yum install -y kernel'
pass | 4364478 | 2019-10-06 17:26:05 | 2019-10-06 18:08:13 | 2019-10-06 18:30:12 | 0:21:59 | 0:09:35 | 0:12:24 | smithi | master | centos | 7.6 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4364479 | 2019-10-06 17:26:06 | 2019-10-06 18:10:05 | 2019-10-06 21:24:07 | 3:14:02 | 2:49:17 | 0:24:45 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
pass | 4364480 | 2019-10-06 17:26:07 | 2019-10-06 18:12:05 | 2019-10-06 18:36:04 | 0:23:59 | 0:15:08 | 0:08:51 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_fio.yaml} | 1 | |
fail | 4364481 | 2019-10-06 17:26:08 | 2019-10-06 18:12:05 | 2019-10-06 19:10:04 | 0:57:59 | 0:37:18 | 0:20:41 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:41:00.995203+0000 mon.b (mon.1) 23 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364482 | 2019-10-06 17:26:09 | 2019-10-06 18:16:45 | 2019-10-06 19:28:45 | 1:12:00 | 0:30:29 | 0:41:31 | smithi | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 4364483 | 2019-10-06 17:26:10 | 2019-10-06 18:16:46 | 2019-10-06 18:54:45 | 0:37:59 | 0:18:08 | 0:19:51 | smithi | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 |
Failure Reason:
"2019-10-06T18:42:15.297069+0000 mon.b (mon.0) 56 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log
pass | 4364484 | 2019-10-06 17:26:11 | 2019-10-06 18:20:36 | 2019-10-06 19:00:35 | 0:39:59 | 0:28:29 | 0:11:30 | smithi | master | rhel | 7.6 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4364485 | 2019-10-06 17:26:12 | 2019-10-06 18:22:34 | 2019-10-06 20:54:35 | 2:32:01 | 0:16:31 | 2:15:30 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
fail | 4364486 | 2019-10-06 17:26:13 | 2019-10-06 18:25:10 | 2019-10-06 18:53:09 | 0:27:59 | 0:18:10 | 0:09:49 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-10-06T18:40:28.364258+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: 17 slow ops, oldest one blocked for 52 sec, mon.c has slow ops (SLOW_OPS)" in cluster log
fail | 4364487 | 2019-10-06 17:26:14 | 2019-10-06 18:25:35 | 2019-10-06 19:27:35 | 1:02:00 | 0:53:38 | 0:08:22 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:44:36.500447+0000 mon.a (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364488 | 2019-10-06 17:26:15 | 2019-10-06 18:27:12 | 2019-10-06 19:01:12 | 0:34:00 | 0:11:26 | 0:22:34 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} | 2 | |
pass | 4364489 | 2019-10-06 17:26:16 | 2019-10-06 18:27:52 | 2019-10-06 18:49:51 | 0:21:59 | 0:08:40 | 0:13:19 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_striper.yaml} | 2 | |
pass | 4364490 | 2019-10-06 17:26:17 | 2019-10-06 18:28:01 | 2019-10-06 19:26:01 | 0:58:00 | 0:23:20 | 0:34:40 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 4364491 | 2019-10-06 17:26:18 | 2019-10-06 18:28:37 | 2019-10-06 18:52:37 | 0:24:00 | 0:14:17 | 0:09:43 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364492 | 2019-10-06 17:26:19 | 2019-10-06 18:30:35 | 2019-10-06 18:52:34 | 0:21:59 | 0:12:48 | 0:09:11 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
pass | 4364493 | 2019-10-06 17:26:19 | 2019-10-06 18:31:54 | 2019-10-06 20:15:55 | 1:44:01 | 0:33:12 | 1:10:49 | smithi | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
fail | 4364494 | 2019-10-06 17:26:20 | 2019-10-06 18:32:16 | 2019-10-06 19:02:15 | 0:29:59 | 0:16:31 | 0:13:28 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2019-10-06T18:53:11.803709+0000 mon.a (mon.1) 13 : cluster [WRN] Health check update: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 4364495 | 2019-10-06 17:26:21 | 2019-10-06 18:33:58 | 2019-10-06 19:37:58 | 1:04:00 | 0:43:13 | 0:20:47 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:03:18.433973+0000 mon.b (mon.0) 21 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364496 | 2019-10-06 17:26:22 | 2019-10-06 18:35:18 | 2019-10-06 19:07:18 | 0:32:00 | 0:21:04 | 0:10:56 | smithi | master | centos | 7.6 | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
"2019-10-06T18:57:27.560624+0000 mon.a (mon.0) 154 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364497 | 2019-10-06 17:26:23 | 2019-10-06 18:36:05 | 2019-10-06 19:50:05 | 1:14:00 | 1:03:22 | 0:10:38 | smithi | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 4364498 | 2019-10-06 17:26:24 | 2019-10-06 18:36:07 | 2019-10-06 19:10:07 | 0:34:00 | 0:23:25 | 0:10:35 | smithi | master | rhel | 7.6 | rados/singleton-nomsgr/{all/osd_stale_reads.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364499 | 2019-10-06 17:26:25 | 2019-10-06 18:38:47 | 2019-10-06 20:04:47 | 1:26:00 | 1:00:33 | 0:25:27 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
"2019-10-06T19:11:59.381706+0000 mon.a (mon.1) 67 : cluster [WRN] Health check update: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 4364500 | 2019-10-06 17:26:26 | 2019-10-06 18:40:15 | 2019-10-06 20:32:16 | 1:52:01 | 0:58:17 | 0:53:44 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:42:10.850812+0000 mon.a (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364501 | 2019-10-06 17:26:27 | 2019-10-06 18:40:17 | 2019-10-06 20:44:18 | 2:04:01 | 0:33:50 | 1:30:11 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
fail | 4364502 | 2019-10-06 17:26:28 | 2019-10-06 18:41:46 | 2019-10-06 22:03:48 | 3:22:02 | 2:30:18 | 0:51:44 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:43:57.862715+0000 mon.a (mon.1) 9 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 4364503 | 2019-10-06 17:26:29 | 2019-10-06 18:46:14 | 2019-10-06 19:08:13 | 0:21:59 | 0:11:44 | 0:10:15 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-10-06T18:59:43.053480+0000 mon.a (mon.0) 10 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 4364504 | 2019-10-06 17:26:30 | 2019-10-06 18:48:39 | 2019-10-06 19:26:38 | 0:37:59 | 0:28:31 | 0:09:28 | smithi | master | rhel | 7.6 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364505 | 2019-10-06 17:26:31 | 2019-10-06 18:50:06 | 2019-10-06 19:16:05 | 0:25:59 | 0:12:49 | 0:13:10 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} | 2 | |
fail | 4364506 | 2019-10-06 17:26:32 | 2019-10-06 18:52:50 | 2019-10-06 19:26:49 | 0:33:59 | 0:20:12 | 0:13:47 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason:
Command failed on smithi063 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4364507 | 2019-10-06 17:26:33 | 2019-10-06 18:52:50 | 2019-10-06 21:50:52 | 2:58:02 | 0:40:31 | 2:17:31 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
"2019-10-06T21:22:45.993893+0000 mon.a (mon.1) 29 : cluster [WRN] Health check update: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 4364508 | 2019-10-06 17:26:33 | 2019-10-06 18:53:10 | 2019-10-06 19:59:10 | 1:06:00 | 0:47:00 | 0:19:00 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:20:20.477973+0000 mon.b (mon.0) 28 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log
fail | 4364509 | 2019-10-06 17:26:34 | 2019-10-06 18:53:48 | 2019-10-06 19:19:48 | 0:26:00 | 0:14:25 | 0:11:35 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
Failure Reason:
Command failed on smithi101 with status 6: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_osd_down_out_interval=0" |
pass | 4364510 | 2019-10-06 17:26:35 | 2019-10-06 18:54:06 | 2019-10-06 19:24:05 | 0:29:59 | 0:20:43 | 0:09:16 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
fail | 4364511 | 2019-10-06 17:26:36 | 2019-10-06 18:54:46 | 2019-10-06 19:34:46 | 0:40:00 | 0:13:59 | 0:26:01 | smithi | master | centos | 7.6 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
Failure Reason:
"2019-10-06T19:32:07.115158+0000 mon.a (mon.0) 532 : cluster [ERR] Health check failed: Module 'pg_autoscaler' has failed: (1,) (MGR_MODULE_ERROR)" in cluster log |
pass | 4364512 | 2019-10-06 17:26:37 | 2019-10-06 18:56:19 | 2019-10-06 19:16:18 | 0:19:59 | 0:10:46 | 0:09:13 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364513 | 2019-10-06 17:26:38 | 2019-10-06 18:58:28 | 2019-10-06 20:00:28 | 1:02:00 | 0:19:50 | 0:42:10 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:46:23.878346+0000 mon.b (mon.0) 19 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
pass | 4364514 | 2019-10-06 17:26:39 | 2019-10-06 18:59:52 | 2019-10-06 19:31:52 | 0:32:00 | 0:25:51 | 0:06:09 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
pass | 4364515 | 2019-10-06 17:26:40 | 2019-10-06 19:00:37 | 2019-10-06 20:30:37 | 1:30:00 | 1:17:12 | 0:12:48 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/scrub.yaml} | 1 | |
pass | 4364516 | 2019-10-06 17:26:41 | 2019-10-06 19:01:26 | 2019-10-06 19:23:26 | 0:22:00 | 0:07:59 | 0:14:01 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364517 | 2019-10-06 17:26:42 | 2019-10-06 19:02:16 | 2019-10-06 19:44:16 | 0:42:00 | 0:10:30 | 0:31:30 | smithi | master | centos | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 4364518 | 2019-10-06 17:26:43 | 2019-10-06 19:02:41 | 2019-10-06 19:54:40 | 0:51:59 | 0:26:13 | 0:25:46 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | ||
fail | 4364519 | 2019-10-06 17:26:44 | 2019-10-06 19:07:33 | 2019-10-06 20:33:33 | 1:26:00 | 0:32:08 | 0:53:52 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason:
"2019-10-06T20:10:43.975117+0000 mon.a (mon.1) 53 : cluster [WRN] Health check update: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
pass | 4364520 | 2019-10-06 17:26:45 | 2019-10-06 19:07:55 | 2019-10-06 20:39:56 | 1:32:01 | 0:15:24 | 1:16:37 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
fail | 4364521 | 2019-10-06 17:26:46 | 2019-10-06 19:08:15 | 2019-10-06 19:40:14 | 0:31:59 | 0:19:55 | 0:12:04 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason:
Test failure: test_perf_counters_mds_get (tasks.mgr.dashboard.test_perf_counters.PerfCountersControllerTest) |
fail | 4364522 | 2019-10-06 17:26:47 | 2019-10-06 19:09:51 | 2019-10-06 19:57:50 | 0:47:59 | 0:35:10 | 0:12:49 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
Failure Reason:
"2019-10-06T19:40:03.883925+0000 mon.b (mon.1) 182 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
pass | 4364523 | 2019-10-06 17:26:48 | 2019-10-06 19:09:51 | 2019-10-06 19:39:50 | 0:29:59 | 0:17:08 | 0:12:51 | smithi | master | rhel | 7.6 | rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364524 | 2019-10-06 17:26:49 | 2019-10-06 19:10:06 | 2019-10-06 19:58:05 | 0:47:59 | 0:27:26 | 0:20:33 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Command failed on smithi101 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool set unique_pool_0 min_size 2' |
pass | 4364525 | 2019-10-06 17:26:49 | 2019-10-06 19:10:08 | 2019-10-06 19:58:08 | 0:48:00 | 0:29:57 | 0:18:03 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364526 | 2019-10-06 17:26:50 | 2019-10-06 19:12:39 | 2019-10-06 21:10:40 | 1:58:01 | 1:32:24 | 0:25:37 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
Command failed on smithi146 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status' |
fail | 4364527 | 2019-10-06 17:26:51 | 2019-10-06 19:16:20 | 2019-10-06 19:46:19 | 0:29:59 | 0:13:40 | 0:16:19 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
Command failed on smithi137 with status 6: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_osd_down_out_interval=0" |
pass | 4364528 | 2019-10-06 17:26:52 | 2019-10-06 19:16:20 | 2019-10-06 19:42:19 | 0:25:59 | 0:15:04 | 0:10:55 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
dead | 4364529 | 2019-10-06 17:26:53 | 2019-10-06 21:36:50 | 2019-10-07 09:39:12 | 12:02:22 | smithi | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
fail | 4364530 | 2019-10-06 17:26:54 | 2019-10-06 21:36:51 | 2019-10-06 22:28:51 | 0:52:00 | 0:32:09 | 0:19:51 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
"2019-10-06T22:10:36.321223+0000 mon.b (mon.0) 69 : cluster [WRN] Health check update: 1/3 mons down, quorum c,a (MON_DOWN)" in cluster log |
fail | 4364531 | 2019-10-06 17:26:55 | 2019-10-06 21:36:53 | 2019-10-06 23:00:53 | 1:24:00 | 1:10:29 | 0:13:31 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
"2019-10-06T21:54:45.930889+0000 mon.b (mon.0) 17 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
pass | 4364532 | 2019-10-06 17:26:56 | 2019-10-06 21:37:46 | 2019-10-06 22:19:45 | 0:41:59 | 0:33:11 | 0:08:48 | smithi | master | rhel | 7.6 | rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
fail | 4364533 | 2019-10-06 17:26:57 | 2019-10-06 21:38:16 | 2019-10-06 23:14:17 | 1:36:01 | 1:28:31 | 0:07:30 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:01:10.343932+0000 mon.b (mon.1) 144 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
fail | 4364534 | 2019-10-06 17:26:58 | 2019-10-06 21:38:19 | 2019-10-06 22:22:19 | 0:44:00 | 0:34:35 | 0:09:25 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
reached maximum tries (200) after waiting for 1200 seconds |
dead | 4364535 | 2019-10-06 17:26:59 | 2019-10-06 21:38:49 | 2019-10-07 09:41:21 | 12:02:32 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |||
pass | 4364536 | 2019-10-06 17:27:00 | 2019-10-06 21:40:35 | 2019-10-06 22:18:35 | 0:38:00 | 0:24:27 | 0:13:33 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
fail | 4364537 | 2019-10-06 17:27:01 | 2019-10-06 21:40:52 | 2019-10-06 23:00:52 | 1:20:00 | 1:05:58 | 0:14:02 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:03:47.130212+0000 mon.b (mon.0) 22 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
pass | 4364538 | 2019-10-06 17:27:02 | 2019-10-06 21:40:58 | 2019-10-06 22:02:57 | 0:21:59 | 0:11:14 | 0:10:45 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4364539 | 2019-10-06 17:27:03 | 2019-10-06 21:41:34 | 2019-10-06 22:09:33 | 0:27:59 | 0:17:20 | 0:10:39 | smithi | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} | 2 | |
pass | 4364540 | 2019-10-06 17:27:04 | 2019-10-06 21:41:57 | 2019-10-06 22:01:56 | 0:19:59 | 0:09:28 | 0:10:31 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
dead | 4364541 | 2019-10-06 17:27:05 | 2019-10-06 21:41:57 | 2019-10-07 09:44:20 | 12:02:23 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |||
fail | 4364542 | 2019-10-06 17:27:05 | 2019-10-06 21:44:18 | 2019-10-06 22:26:18 | 0:42:00 | 0:15:58 | 0:26:02 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
"2019-10-06T22:16:21.514725+0000 mon.c (mon.0) 29 : cluster [WRN] Health check failed: 1/3 mons down, quorum c,a (MON_DOWN)" in cluster log |
fail | 4364543 | 2019-10-06 17:27:06 | 2019-10-06 21:44:49 | 2019-10-06 22:24:48 | 0:39:59 | 0:29:57 | 0:10:02 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-10-06T21:59:26.693754+0000 mon.b (mon.1) 12 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
pass | 4364544 | 2019-10-06 17:27:07 | 2019-10-06 21:44:58 | 2019-10-06 22:20:58 | 0:36:00 | 0:24:47 | 0:11:13 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 4364545 | 2019-10-06 17:27:08 | 2019-10-06 21:45:11 | 2019-10-06 22:09:10 | 0:23:59 | 0:12:01 | 0:11:58 | smithi | master | centos | 7.6 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
fail | 4364546 | 2019-10-06 17:27:09 | 2019-10-06 21:45:17 | 2019-10-06 22:21:16 | 0:35:59 | 0:23:36 | 0:12:23 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:02:53.431007+0000 mon.b (mon.0) 16 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log |
pass | 4364547 | 2019-10-06 17:27:10 | 2019-10-06 21:46:22 | 2019-10-06 22:06:21 | 0:19:59 | 0:12:07 | 0:07:52 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
pass | 4364548 | 2019-10-06 17:27:11 | 2019-10-06 21:46:22 | 2019-10-06 22:04:22 | 0:18:00 | 0:08:15 | 0:09:45 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364549 | 2019-10-06 17:27:12 | 2019-10-06 21:46:22 | 2019-10-06 22:10:22 | 0:24:00 | 0:14:56 | 0:09:04 | smithi | master | rhel | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364550 | 2019-10-06 17:27:13 | 2019-10-06 21:46:32 | 2019-10-07 00:18:34 | 2:32:02 | 2:22:12 | 0:09:50 | smithi | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi151 with status 11: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 4364551 | 2019-10-06 17:27:14 | 2019-10-06 21:46:33 | 2019-10-06 22:14:32 | 0:27:59 | 0:16:44 | 0:11:15 | smithi | master | centos | rados/singleton-flat/valgrind-leaks.yaml | 1 | ||
pass | 4364552 | 2019-10-06 17:27:15 | 2019-10-06 21:46:38 | 2019-10-06 22:10:38 | 0:24:00 | 0:12:16 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364553 | 2019-10-06 17:27:16 | 2019-10-06 21:47:40 | 2019-10-06 22:09:39 | 0:21:59 | 0:11:11 | 0:10:48 | smithi | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/crush.yaml} | 1 | |
dead | 4364554 | 2019-10-06 17:27:17 | 2019-10-06 21:48:05 | 2019-10-07 09:50:27 | 12:02:22 | smithi | master | rhel | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |||
pass | 4364555 | 2019-10-06 17:27:17 | 2019-10-06 21:48:39 | 2019-10-06 22:10:38 | 0:21:59 | 0:11:46 | 0:10:13 | smithi | master | centos | 7.6 | rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4364556 | 2019-10-06 17:27:18 | 2019-10-06 21:50:22 | 2019-10-07 09:52:50 | 12:02:28 | 11:45:41 | 0:16:47 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
psutil.NoSuchProcess process no longer exists (pid=30615) |
fail | 4364557 | 2019-10-06 17:27:19 | 2019-10-06 21:50:42 | 2019-10-06 22:24:42 | 0:34:00 | 0:25:23 | 0:08:37 | smithi | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/readwrite.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:10:36.388989+0000 mon.a (mon.0) 35 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log |
fail | 4364558 | 2019-10-06 17:27:20 | 2019-10-06 21:50:42 | 2019-10-06 22:32:42 | 0:42:00 | 0:24:12 | 0:17:48 | smithi | master | rhel | 7.6 | rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
Failure Reason:
failed to reach quorum size 21 before timeout expired |
fail | 4364559 | 2019-10-06 17:27:21 | 2019-10-06 21:50:53 | 2019-10-06 23:04:53 | 1:14:00 | 1:02:00 | 0:12:00 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason:
"2019-10-06T22:10:15.666270+0000 mon.b (mon.0) 297 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
fail | 4364560 | 2019-10-06 17:27:22 | 2019-10-06 21:51:01 | 2019-10-06 22:55:01 | 1:04:00 | 0:49:14 | 0:14:46 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:13:22.819051+0000 mon.b (mon.0) 86 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
fail | 4364561 | 2019-10-06 17:27:23 | 2019-10-06 21:51:08 | 2019-10-06 22:39:07 | 0:47:59 | 0:33:48 | 0:14:11 | smithi | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
Failure Reason:
Command failed on smithi161 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_rados_delete_pools_parallel' |
pass | 4364562 | 2019-10-06 17:27:24 | 2019-10-06 21:51:47 | 2019-10-06 22:13:46 | 0:21:59 | 0:10:47 | 0:11:12 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/insights.yaml} | 2 | |
pass | 4364563 | 2019-10-06 17:27:25 | 2019-10-06 21:52:32 | 2019-10-06 23:56:33 | 2:04:01 | 1:17:09 | 0:46:52 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
fail | 4364564 | 2019-10-06 17:27:26 | 2019-10-06 21:53:16 | 2019-10-06 23:45:17 | 1:52:01 | 1:26:44 | 0:25:17 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
Command failed on smithi002 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
fail | 4364565 | 2019-10-06 17:27:27 | 2019-10-06 21:54:33 | 2019-10-07 00:42:35 | 2:48:02 | 2:37:38 | 0:10:24 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
Command failed on smithi177 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json' |
pass | 4364566 | 2019-10-06 17:27:28 | 2019-10-06 21:54:34 | 2019-10-06 22:22:33 | 0:27:59 | 0:16:58 | 0:11:01 | smithi | master | centos | 7.6 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4364567 | 2019-10-06 17:27:29 | 2019-10-06 21:54:55 | 2019-10-06 22:36:55 | 0:42:00 | 0:29:45 | 0:12:15 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 4364568 | 2019-10-06 17:27:30 | 2019-10-06 21:56:36 | 2019-10-06 22:18:35 | 0:21:59 | 0:15:19 | 0:06:40 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
dead | 4364569 | 2019-10-06 17:27:31 | 2019-10-06 21:56:48 | 2019-10-07 09:59:12 | 12:02:24 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |||
pass | 4364570 | 2019-10-06 17:27:31 | 2019-10-06 21:56:53 | 2019-10-06 23:28:54 | 1:32:01 | 1:21:21 | 0:10:40 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4364571 | 2019-10-06 17:27:32 | 2019-10-06 21:58:37 | 2019-10-06 22:18:36 | 0:19:59 | 0:08:15 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364572 | 2019-10-06 17:27:33 | 2019-10-06 21:58:38 | 2019-10-07 01:30:41 | 3:32:03 | 3:18:05 | 0:13:58 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi041 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 4364573 | 2019-10-06 17:27:34 | 2019-10-06 21:58:38 | 2019-10-06 22:56:38 | 0:58:00 | 0:39:12 | 0:18:48 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/repair_test.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:22:40.187821+0000 mon.b (mon.0) 40 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log |
dead | 4364574 | 2019-10-06 17:27:35 | 2019-10-06 21:59:28 | 2019-10-07 10:01:52 | 12:02:24 | smithi | master | rhel | 7.6 | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |||
pass | 4364575 | 2019-10-06 17:27:36 | 2019-10-06 22:00:24 | 2019-10-06 22:36:23 | 0:35:59 | 0:29:23 | 0:06:36 | smithi | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/module_selftest.yaml} | 2 | |
pass | 4364576 | 2019-10-06 17:27:37 | 2019-10-06 22:00:47 | 2019-10-06 22:18:46 | 0:17:59 | 0:09:44 | 0:08:15 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
fail | 4364577 | 2019-10-06 17:27:38 | 2019-10-06 22:01:48 | 2019-10-06 22:49:48 | 0:48:00 | 0:35:44 | 0:12:16 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:18:11.923990+0000 mon.b (mon.1) 22 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364578 | 2019-10-06 17:27:39 | 2019-10-06 22:01:57 | 2019-10-07 04:52:03 | 6:50:06 | 0:16:16 | 6:33:50 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
pass | 4364579 | 2019-10-06 17:27:40 | 2019-10-06 22:01:58 | 2019-10-06 22:23:57 | 0:21:59 | 0:12:49 | 0:09:10 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364580 | 2019-10-06 17:27:41 | 2019-10-06 22:02:25 | 2019-10-06 22:56:24 | 0:53:59 | 0:39:22 | 0:14:37 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
"2019-10-06T22:28:09.607616+0000 mon.b (mon.0) 16 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log
pass | 4364581 | 2019-10-06 17:27:42 | 2019-10-06 22:02:26 | 2019-10-06 22:40:25 | 0:37:59 | 0:19:36 | 0:18:23 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4364582 | 2019-10-06 17:27:43 | 2019-10-06 22:03:13 | 2019-10-06 22:23:12 | 0:19:59 | 0:08:45 | 0:11:14 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364583 | 2019-10-06 17:27:44 | 2019-10-06 22:03:50 | 2019-10-06 22:31:49 | 0:27:59 | 0:14:00 | 0:13:59 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
fail | 4364584 | 2019-10-06 17:27:45 | 2019-10-06 22:04:35 | 2019-10-07 00:00:35 | 1:56:00 | 1:47:00 | 0:09:00 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:25:16.219822+0000 mon.b (mon.0) 18 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log
pass | 4364585 | 2019-10-06 17:27:45 | 2019-10-06 22:04:35 | 2019-10-06 22:44:34 | 0:39:59 | 0:23:39 | 0:16:20 | smithi | master | centos | 7.6 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
pass | 4364586 | 2019-10-06 17:27:46 | 2019-10-06 22:04:35 | 2019-10-06 22:26:34 | 0:21:59 | 0:12:00 | 0:09:59 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
fail | 4364587 | 2019-10-06 17:27:47 | 2019-10-06 22:06:38 | 2019-10-06 22:38:37 | 0:31:59 | 0:22:36 | 0:09:23 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:28:16.455119+0000 mon.a (mon.1) 50 : cluster [WRN] Health check update: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364588 | 2019-10-06 17:27:48 | 2019-10-06 22:06:39 | 2019-10-06 22:38:38 | 0:31:59 | 0:12:38 | 0:19:21 | smithi | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/sync.yaml workloads/rados_5925.yaml} | 2 | |
pass | 4364589 | 2019-10-06 17:27:49 | 2019-10-06 22:06:48 | 2019-10-06 22:44:48 | 0:38:00 | 0:20:35 | 0:17:25 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rgw_snaps.yaml} | 2 | |
pass | 4364590 | 2019-10-06 17:27:50 | 2019-10-06 22:08:43 | 2019-10-06 22:52:43 | 0:44:00 | 0:38:39 | 0:05:21 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/erasure-code.yaml} | 1 | |
fail | 4364591 | 2019-10-06 17:27:51 | 2019-10-06 22:08:52 | 2019-10-06 23:30:52 | 1:22:00 | 0:35:59 | 0:46:01 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
"2019-10-06T23:06:17.268008+0000 mon.a (mon.0) 23 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
fail | 4364592 | 2019-10-06 17:27:52 | 2019-10-06 22:09:11 | 2019-10-06 22:47:11 | 0:38:00 | 0:27:44 | 0:10:16 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-async-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:24:49.217831+0000 mon.b (mon.1) 18 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364593 | 2019-10-06 17:27:53 | 2019-10-06 22:09:23 | 2019-10-06 23:41:23 | 1:32:00 | 1:17:53 | 0:14:07 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:32:23.547217+0000 mon.b (mon.1) 12 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364594 | 2019-10-06 17:27:54 | 2019-10-06 22:09:30 | 2019-10-07 00:39:32 | 2:30:02 | 2:22:58 | 0:07:04 | smithi | master | rhel | 7.6 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
"2019-10-06T22:26:42.974842+0000 mon.a (mon.0) 23 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364595 | 2019-10-06 17:27:55 | 2019-10-06 22:09:35 | 2019-10-06 22:35:34 | 0:25:59 | 0:16:25 | 0:09:34 | smithi | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
fail | 4364596 | 2019-10-06 17:27:56 | 2019-10-06 22:09:40 | 2019-10-07 00:07:41 | 1:58:01 | 1:43:28 | 0:14:33 | smithi | master | rhel | 7.6 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_recovery.yaml} | 2 | |
Failure Reason:
"2019-10-06T23:42:58.442303+0000 mon.a (mon.1) 252 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log
fail | 4364597 | 2019-10-06 17:27:57 | 2019-10-06 22:10:37 | 2019-10-07 00:44:39 | 2:34:02 | 2:20:28 | 0:13:34 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
"2019-10-06T22:43:49.218144+0000 mon.b (mon.0) 176 : cluster [WRN] Health check failed: 23 slow ops, oldest one blocked for 49 sec, daemons [osd.0,mon.a,mon.b,mon.c] have slow ops. (SLOW_OPS)" in cluster log
fail | 4364598 | 2019-10-06 17:27:58 | 2019-10-06 22:10:39 | 2019-10-06 22:40:39 | 0:30:00 | 0:13:29 | 0:16:31 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Command failed on smithi005 with status 6: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph -- tell 'mon.*' injectargs --mon_osd_down_out_interval=0"
pass | 4364599 | 2019-10-06 17:27:59 | 2019-10-06 22:10:40 | 2019-10-06 22:30:39 | 0:19:59 | 0:07:30 | 0:12:29 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364600 | 2019-10-06 17:28:00 | 2019-10-06 22:13:30 | 2019-10-06 23:23:30 | 1:10:00 | 0:33:24 | 0:36:36 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
pass | 4364601 | 2019-10-06 17:28:01 | 2019-10-06 22:13:48 | 2019-10-06 22:33:47 | 0:19:59 | 0:13:17 | 0:06:42 | smithi | master | rhel | 7.6 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364602 | 2019-10-06 17:28:01 | 2019-10-06 22:14:16 | 2019-10-06 22:38:15 | 0:23:59 | 0:17:08 | 0:06:51 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
pass | 4364603 | 2019-10-06 17:28:02 | 2019-10-06 22:14:48 | 2019-10-06 23:24:48 | 1:10:00 | 0:14:07 | 0:55:53 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
pass | 4364604 | 2019-10-06 17:28:03 | 2019-10-06 22:14:48 | 2019-10-06 22:36:47 | 0:21:59 | 0:13:57 | 0:08:02 | smithi | master | rhel | 7.6 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364605 | 2019-10-06 17:28:04 | 2019-10-06 22:17:14 | 2019-10-06 22:37:13 | 0:19:59 | 0:10:22 | 0:09:37 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364606 | 2019-10-06 17:28:05 | 2019-10-06 22:17:14 | 2019-10-06 22:47:13 | 0:29:59 | 0:18:27 | 0:11:32 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:40:22.371047+0000 mon.b (mon.0) 1193 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364607 | 2019-10-06 17:28:06 | 2019-10-06 22:18:27 | 2019-10-06 23:20:27 | 1:02:00 | 0:52:34 | 0:09:26 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/scrub_test.yaml} | 2 | |
Failure Reason:
Command failed on smithi132 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
pass | 4364608 | 2019-10-06 17:28:07 | 2019-10-06 22:18:36 | 2019-10-06 22:52:35 | 0:33:59 | 0:23:53 | 0:10:06 | smithi | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364609 | 2019-10-06 17:28:08 | 2019-10-06 22:18:36 | 2019-10-06 22:36:35 | 0:17:59 | 0:07:28 | 0:10:31 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364610 | 2019-10-06 17:28:09 | 2019-10-06 22:18:37 | 2019-10-06 22:48:37 | 0:30:00 | 0:21:00 | 0:09:00 | smithi | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} | 2 | |
pass | 4364611 | 2019-10-06 17:28:10 | 2019-10-06 22:18:48 | 2019-10-06 22:42:47 | 0:23:59 | 0:12:25 | 0:11:34 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
pass | 4364612 | 2019-10-06 17:28:11 | 2019-10-06 22:19:43 | 2019-10-06 22:41:42 | 0:21:59 | 0:14:47 | 0:07:12 | smithi | master | rhel | 7.6 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4364613 | 2019-10-06 17:28:12 | 2019-10-06 22:19:47 | 2019-10-06 23:07:47 | 0:48:00 | 0:36:10 | 0:11:50 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:42:09.546934+0000 mon.b (mon.0) 24 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364614 | 2019-10-06 17:28:13 | 2019-10-06 22:19:59 | 2019-10-07 01:20:01 | 3:00:02 | 2:45:54 | 0:14:08 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:42:54.432303+0000 mon.a (mon.0) 54 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364615 | 2019-10-06 17:28:14 | 2019-10-06 22:21:14 | 2019-10-06 23:13:13 | 0:51:59 | 0:16:32 | 0:35:27 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
fail | 4364616 | 2019-10-06 17:28:15 | 2019-10-06 22:21:18 | 2019-10-07 00:39:19 | 2:18:01 | 0:23:30 | 1:54:31 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
"2019-10-07T00:20:17.375715+0000 mon.a (mon.0) 21 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364617 | 2019-10-06 17:28:15 | 2019-10-06 22:22:35 | 2019-10-06 23:28:35 | 1:06:00 | 0:26:41 | 0:39:19 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4364618 | 2019-10-06 17:28:16 | 2019-10-06 22:22:35 | 2019-10-06 22:40:34 | 0:17:59 | 0:08:47 | 0:09:12 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364619 | 2019-10-06 17:28:17 | 2019-10-06 22:23:14 | 2019-10-06 23:35:14 | 1:12:00 | 0:39:53 | 0:32:07 | smithi | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
Failure Reason:
Command failed on smithi175 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_0 unique_pool_0 --yes-i-really-really-mean-it'
fail | 4364620 | 2019-10-06 17:28:18 | 2019-10-06 22:24:12 | 2019-10-07 02:10:15 | 3:46:03 | 3:34:18 | 0:11:45 | smithi | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 4364621 | 2019-10-06 17:28:19 | 2019-10-06 22:24:43 | 2019-10-07 00:20:44 | 1:56:01 | 1:36:23 | 0:19:38 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
Command failed on smithi154 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 4364622 | 2019-10-06 17:28:20 | 2019-10-06 22:24:50 | 2019-10-06 23:28:50 | 1:04:00 | 0:39:39 | 0:24:21 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason:
"2019-10-06T23:07:46.837733+0000 mon.a (mon.0) 421 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364623 | 2019-10-06 17:28:21 | 2019-10-06 22:26:34 | 2019-10-06 23:28:34 | 1:02:00 | 0:48:26 | 0:13:34 | smithi | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:47:17.540295+0000 mon.a (mon.1) 170 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 4364624 | 2019-10-06 17:28:22 | 2019-10-06 22:26:35 | 2019-10-06 22:48:35 | 0:22:00 | 0:16:21 | 0:05:39 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-basic-min-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
pass | 4364625 | 2019-10-06 17:28:23 | 2019-10-06 22:29:07 | 2019-10-06 22:45:06 | 0:15:59 | 0:08:19 | 0:07:40 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4364626 | 2019-10-06 17:28:24 | 2019-10-06 22:30:56 | 2019-10-06 23:00:55 | 0:29:59 | 0:14:30 | 0:15:29 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason:
"2019-10-06T22:50:37.680862+0000 mon.b (mon.1) 65 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364627 | 2019-10-06 17:28:25 | 2019-10-06 22:31:51 | 2019-10-06 22:53:50 | 0:21:59 | 0:12:06 | 0:09:53 | smithi | master | centos | 7.6 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4364628 | 2019-10-06 17:28:26 | 2019-10-06 22:32:57 | 2019-10-07 00:54:58 | 2:22:01 | 2:13:05 | 0:08:56 | smithi | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364629 | 2019-10-06 17:28:27 | 2019-10-06 22:33:40 | 2019-10-06 23:13:40 | 0:40:00 | 0:28:48 | 0:11:12 | smithi | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/misc.yaml} | 1 | |
pass | 4364630 | 2019-10-06 17:28:28 | 2019-10-06 22:33:48 | 2019-10-06 22:57:47 | 0:23:59 | 0:10:39 | 0:13:20 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} | 2 | |
fail | 4364631 | 2019-10-06 17:28:29 | 2019-10-06 22:35:49 | 2019-10-06 23:47:49 | 1:12:00 | 1:00:53 | 0:11:07 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
Command failed on smithi059 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_0 unique_pool_0 --yes-i-really-really-mean-it'
pass | 4364632 | 2019-10-06 17:28:30 | 2019-10-06 22:36:25 | 2019-10-06 22:58:24 | 0:21:59 | 0:13:37 | 0:08:22 | smithi | master | rhel | 7.6 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 4364633 | 2019-10-06 17:28:31 | 2019-10-06 22:36:37 | 2019-10-06 23:00:37 | 0:24:00 | 0:11:15 | 0:12:45 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |||
fail | 4364634 | 2019-10-06 17:28:32 | 2019-10-06 22:36:49 | 2019-10-06 23:26:48 | 0:49:59 | 0:40:41 | 0:09:18 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
"2019-10-06T22:50:24.781513+0000 mon.b (mon.1) 22 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364635 | 2019-10-06 17:28:33 | 2019-10-06 22:37:10 | 2019-10-07 03:11:14 | 4:34:04 | 0:26:06 | 4:07:58 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
pass | 4364636 | 2019-10-06 17:28:34 | 2019-10-06 22:37:15 | 2019-10-06 22:59:14 | 0:21:59 | 0:12:29 | 0:09:30 | smithi | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
dead | 4364637 | 2019-10-06 17:28:35 | 2019-10-06 22:38:30 | 2019-10-07 10:40:57 | 12:02:27 | 11:50:43 | 0:11:44 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason:
SSH connection to smithi120 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 10000 --objects 6600 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 1200 --op read 100 --op write 50 --op copy_from 50 --op write_excl 50 --op delete 50 --pool base'
pass | 4364638 | 2019-10-06 17:28:35 | 2019-10-06 22:38:38 | 2019-10-06 22:54:38 | 0:16:00 | 0:07:11 | 0:08:49 | smithi | master | ubuntu | 18.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364639 | 2019-10-06 17:28:36 | 2019-10-06 22:38:40 | 2019-10-06 23:00:39 | 0:21:59 | 0:15:25 | 0:06:34 | smithi | master | rhel | 7.6 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364640 | 2019-10-06 17:28:37 | 2019-10-06 22:39:09 | 2019-10-06 23:17:08 | 0:37:59 | 0:14:19 | 0:23:40 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_cls_all.yaml} | 2 | |
fail | 4364641 | 2019-10-06 17:28:38 | 2019-10-06 22:39:51 | 2019-10-06 23:09:50 | 0:29:59 | 0:14:48 | 0:15:11 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: "2019-10-06T22:59:30.579255+0000 mon.b (mon.1) 16 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
pass | 4364642 | 2019-10-06 17:28:39 | 2019-10-06 22:40:26 | 2019-10-06 23:20:26 | 0:40:00 | 0:31:11 | 0:08:49 | smithi | master | centos | 7.6 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364643 | 2019-10-06 17:28:40 | 2019-10-06 22:40:35 | 2019-10-07 00:26:36 | 1:46:01 | 1:31:45 | 0:14:16 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-partial-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Command failed on smithi136 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
pass | 4364644 | 2019-10-06 17:28:41 | 2019-10-06 22:40:40 | 2019-10-06 23:20:40 | 0:40:00 | 0:11:00 | 0:29:00 | smithi | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
pass | 4364645 | 2019-10-06 17:28:42 | 2019-10-06 22:41:58 | 2019-10-06 23:15:57 | 0:33:59 | 0:26:21 | 0:07:38 | smithi | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
fail | 4364646 | 2019-10-06 17:28:43 | 2019-10-06 22:42:49 | 2019-10-06 23:36:48 | 0:53:59 | 0:29:30 | 0:24:29 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-async-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason: "2019-10-06T23:12:58.519574+0000 mon.a (mon.1) 52 : cluster [WRN] Health check update: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 4364647 | 2019-10-06 17:28:44 | 2019-10-06 22:44:50 | 2019-10-07 00:42:51 | 1:58:01 | 1:44:08 | 0:13:53 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: "2019-10-06T23:03:54.357799+0000 mon.b (mon.1) 11 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364648 | 2019-10-06 17:28:45 | 2019-10-06 22:44:50 | 2019-10-06 23:50:50 | 1:06:00 | 0:59:52 | 0:06:08 | smithi | master | rhel | 7.6 | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Command failed on smithi018 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'
pass | 4364649 | 2019-10-06 17:28:46 | 2019-10-06 22:45:08 | 2019-10-06 23:05:07 | 0:19:59 | 0:08:24 | 0:11:35 | smithi | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4364650 | 2019-10-06 17:28:47 | 2019-10-06 22:46:16 | 2019-10-07 01:22:18 | 2:36:02 | 2:27:22 | 0:08:40 | smithi | master | centos | 7.6 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364651 | 2019-10-06 17:28:48 | 2019-10-06 22:47:13 | 2019-10-07 02:31:15 | 3:44:02 | 3:26:42 | 0:17:20 | smithi | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
Failure Reason: Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi023 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'
fail | 4364652 | 2019-10-06 17:28:49 | 2019-10-06 22:47:15 | 2019-10-06 23:25:14 | 0:37:59 | 0:24:52 | 0:13:07 | smithi | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Command failed on smithi060 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'
fail | 4364653 | 2019-10-06 17:28:50 | 2019-10-06 22:48:50 | 2019-10-07 00:08:50 | 1:20:00 | 1:00:17 | 0:19:43 | smithi | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: "2019-10-06T23:16:26.278512+0000 mon.a (mon.0) 16 : cluster [WRN] Health check update: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364654 | 2019-10-06 17:28:51 | 2019-10-06 22:48:50 | 2019-10-07 02:56:54 | 4:08:04 | 0:26:47 | 3:41:17 | smithi | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: "2019-10-07T02:42:15.029205+0000 mon.a (mon.0) 21 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 4364655 | 2019-10-06 17:28:51 | 2019-10-06 22:49:36 | 2019-10-07 01:01:37 | 2:12:01 | 1:16:41 | 0:55:20 | smithi | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
pass | 4364656 | 2019-10-06 17:28:52 | 2019-10-06 22:49:49 | 2019-10-06 23:29:49 | 0:40:00 | 0:29:35 | 0:10:25 | smithi | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4364657 | 2019-10-06 17:28:53 | 2019-10-06 22:52:50 | 2019-10-06 23:10:49 | 0:17:59 | 0:09:55 | 0:08:04 | smithi | master | centos | 7.6 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4364658 | 2019-10-06 17:28:54 | 2019-10-06 22:52:50 | 2019-10-06 23:34:50 | 0:42:00 | 0:28:42 | 0:13:18 | smithi | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/test_python.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
pass | 4364659 | 2019-10-06 17:28:55 | 2019-10-06 22:53:48 | 2019-10-06 23:11:48 | 0:18:00 | 0:09:20 | 0:08:40 | smithi | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
fail | 4364660 | 2019-10-06 17:28:56 | 2019-10-06 22:54:06 | 2019-10-07 00:12:06 | 1:18:00 | 1:04:34 | 0:13:26 | smithi | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-async-partial-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: "2019-10-06T23:15:28.647374+0000 mon.a (mon.0) 25 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log
pass | 4364661 | 2019-10-06 17:28:57 | 2019-10-06 22:54:39 | 2019-10-06 23:18:38 | 0:23:59 | 0:17:57 | 0:06:02 | smithi | master | rhel | 7.6 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4364662 | 2019-10-06 17:28:58 | 2019-10-06 22:55:03 | 2019-10-06 23:21:02 | 0:25:59 | 0:08:47 | 0:17:12 | smithi | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} | 2 | |
fail | 4364663 | 2019-10-06 17:28:59 | 2019-10-06 22:56:24 | 2019-10-07 02:36:27 | 3:40:03 | 3:30:22 | 0:09:41 | smithi | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi077 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 4364664 | 2019-10-06 17:29:00 | 2019-10-06 22:56:25 | 2019-10-06 23:16:25 | 0:20:00 | 0:11:05 | 0:08:55 | smithi | master | centos | 7.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4364665 | 2019-10-06 17:29:01 | 2019-10-06 22:56:39 | 2019-10-06 23:48:39 | 0:52:00 | 0:45:28 | 0:06:32 | smithi | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/mon.yaml} | 1 | |
pass | 4364666 | 2019-10-06 17:29:02 | 2019-10-06 22:58:03 | 2019-10-07 05:36:09 | 6:38:06 | 3:14:28 | 3:23:38 | smithi | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 | |
fail | 4364667 | 2019-10-06 17:29:03 | 2019-10-06 22:58:05 | 2019-10-06 23:34:05 | 0:36:00 | 0:23:20 | 0:12:40 | smithi | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: "2019-10-06T23:15:53.143775+0000 mon.b (mon.1) 14 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log
fail | 4364668 | 2019-10-06 17:29:04 | 2019-10-06 22:58:25 | 2019-10-06 23:50:25 | 0:52:00 | 0:22:26 | 0:29:34 | smithi | master | rhel | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
Failure Reason: Command failed on smithi085 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status'
fail | 4364669 | 2019-10-06 17:29:04 | 2019-10-06 22:59:29 | 2019-10-07 05:53:36 | 6:54:07 | 6:44:08 | 0:09:59 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason: Command failed (workunit test rados/test.sh) on smithi095 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f98fc15e0b0631a25e909062c2def999ad7f2350 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'