Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 6301419 2021-07-29 16:36:27 2021-07-29 16:38:26 2021-07-29 17:20:28 0:42:02 0:32:16 0:09:46 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
dead 6301420 2021-07-29 16:36:28 2021-07-29 16:40:17 2021-07-30 04:53:35 12:13:18 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

hit max job timeout

fail 6301421 2021-07-29 16:36:29 2021-07-29 16:44:27 2021-07-29 17:41:05 0:56:38 0:40:56 0:15:42 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} 1
Failure Reason:

Command failed (workunit test osd-backfill/osd-backfill-space.sh) on smithi205 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd-backfill/osd-backfill-space.sh'

pass 6301422 2021-07-29 16:36:30 2021-07-29 16:49:58 2021-07-29 17:19:16 0:29:18 0:20:13 0:09:05 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6301423 2021-07-29 16:36:31 2021-07-29 16:51:59 2021-07-29 18:45:42 1:53:43 1:42:50 0:10:53 smithi master centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
dead 6301424 2021-07-29 16:36:32 2021-07-29 16:52:19 2021-07-30 05:01:34 12:09:15 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/small-objects} 2
Failure Reason:

hit max job timeout

fail 6301425 2021-07-29 16:36:33 2021-07-29 16:52:39 2021-07-29 18:25:38 1:32:59 1:09:14 0:23:45 smithi master centos 8.stream rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{centos_8.stream}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''

fail 6301426 2021-07-29 16:36:34 2021-07-29 16:52:40 2021-07-29 17:23:48 0:31:08 0:19:38 0:11:30 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi166 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6301427 2021-07-29 16:36:35 2021-07-29 16:55:00 2021-07-29 18:51:03 1:56:03 1:47:50 0:08:13 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/pg-split-merge.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/pg-split-merge.sh'

pass 6301428 2021-07-29 16:36:36 2021-07-29 16:55:40 2021-07-29 17:35:47 0:40:07 0:29:38 0:10:29 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 2
dead 6301429 2021-07-29 16:36:37 2021-07-29 16:56:01 2021-07-30 04:58:48 12:02:47 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6301430 2021-07-29 16:36:38 2021-07-29 16:56:51 2021-07-29 17:29:34 0:32:43 0:12:26 0:20:17 smithi master ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*SyntheticMatrixC*/2 --gtest_catch_exceptions=0\''

pass 6301431 2021-07-29 16:36:39 2021-07-29 16:57:21 2021-07-29 17:38:25 0:41:04 0:32:30 0:08:34 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
pass 6301432 2021-07-29 16:36:40 2021-07-29 16:57:21 2021-07-29 17:24:30 0:27:09 0:17:27 0:09:42 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
pass 6301433 2021-07-29 16:36:41 2021-07-29 16:57:52 2021-07-29 17:38:12 0:40:20 0:28:28 0:11:52 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6301434 2021-07-29 16:36:42 2021-07-29 16:59:02 2021-07-29 17:45:42 0:46:40 0:37:09 0:09:31 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

pass 6301435 2021-07-29 16:36:43 2021-07-29 16:59:02 2021-07-29 17:21:43 0:22:41 0:10:41 0:12:00 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6301436 2021-07-29 16:36:44 2021-07-29 17:00:13 2021-07-29 17:27:02 0:26:49 0:16:12 0:10:37 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-agent-small} 2
dead 6301437 2021-07-29 16:36:45 2021-07-29 17:00:23 2021-07-30 04:59:22 11:58:59 smithi master centos 8.stream rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8.stream}} 2
pass 6301438 2021-07-29 16:36:46 2021-07-29 17:01:43 2021-07-29 17:20:28 0:18:45 0:10:13 0:08:32 smithi master centos 8.stream rados/objectstore/{backends/alloc-hint supported-random-distro$/{centos_8.stream}} 1
pass 6301439 2021-07-29 16:36:47 2021-07-29 17:01:44 2021-07-29 17:44:49 0:43:05 0:36:05 0:07:00 smithi master rhel 8.4 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 2
fail 6301440 2021-07-29 16:36:48 2021-07-29 17:01:54 2021-07-29 17:25:01 0:23:07 0:12:29 0:10:38 smithi master centos 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{centos_8.3_kubic_stable} 2-node-mgr orchestrator_cli} 2
Failure Reason:

Test failure: test_host_rm (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

fail 6301441 2021-07-29 16:36:49 2021-07-29 17:02:04 2021-07-29 20:25:06 3:23:02 3:13:50 0:09:12 smithi master centos 8.stream rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8.stream}} 2
Failure Reason:

Command failed (workunit test rados/load-gen-mix-small.sh) on smithi074 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix-small.sh'

pass 6301442 2021-07-29 16:36:50 2021-07-29 17:02:04 2021-07-29 17:23:48 0:21:44 0:11:57 0:09:47 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
pass 6301443 2021-07-29 16:36:51 2021-07-29 17:02:35 2021-07-29 17:43:07 0:40:32 0:32:36 0:07:56 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 6301444 2021-07-29 16:36:52 2021-07-29 17:02:35 2021-07-29 17:43:50 0:41:15 0:32:07 0:09:08 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_kubic_stable 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 6301445 2021-07-29 16:36:52 2021-07-29 17:02:45 2021-07-29 17:24:54 0:22:09 0:09:50 0:12:19 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/dedup-io-mixed} 2
pass 6301446 2021-07-29 16:36:53 2021-07-29 17:03:05 2021-07-29 17:33:27 0:30:22 0:17:56 0:12:26 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
fail 6301447 2021-07-29 16:36:54 2021-07-29 17:03:46 2021-07-29 17:49:10 0:45:24 0:34:49 0:10:35 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/erasure-code} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

fail 6301448 2021-07-29 16:36:55 2021-07-29 17:03:46 2021-07-29 17:33:43 0:29:57 0:19:41 0:10:16 smithi master centos 8.2 rados/dashboard/{centos_8.2_kubic_stable debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5f34ec8eb81f73f4f9e714a259fae9396e310b5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6301449 2021-07-29 16:36:56 2021-07-29 17:04:46 2021-07-29 17:49:05 0:44:19 0:31:27 0:12:52 smithi master centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} 1
pass 6301450 2021-07-29 16:36:57 2021-07-29 17:07:37 2021-07-29 17:34:38 0:27:01 0:16:48 0:10:13 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream} tasks/repair_test} 2
dead 6301451 2021-07-29 16:36:58 2021-07-29 17:07:37 2021-07-30 04:58:32 11:50:55 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
dead 6301452 2021-07-29 16:36:59 2021-07-29 17:08:38 2021-07-30 04:58:43 11:50:05 smithi master centos 8.3 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 6301453 2021-07-29 16:37:00 2021-07-29 17:08:58 2021-07-29 17:39:52 0:30:54 0:21:25 0:09:29 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

pass 6301454 2021-07-29 16:37:01 2021-07-29 17:08:58 2021-07-29 18:12:03 1:03:05 0:55:14 0:07:51 smithi master ubuntu 20.04 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 6301455 2021-07-29 16:37:02 2021-07-29 17:08:58 2021-07-29 17:36:01 0:27:03 0:16:49 0:10:14 smithi master centos 8.2 rados/cephadm/workunits/{0-distro/centos_8.2_kubic_stable mon_election/classic task/test_cephadm} 1
pass 6301456 2021-07-29 16:37:03 2021-07-29 17:09:09 2021-07-29 17:37:26 0:28:17 0:17:23 0:10:54 smithi master ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 1-start 2-services/mirror 3-final} 2
pass 6301457 2021-07-29 16:37:04 2021-07-29 17:09:29 2021-07-29 19:54:39 2:45:10 2:39:00 0:06:10 smithi master rhel 8.4 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{rhel_8}} 1
pass 6301458 2021-07-29 16:37:05 2021-07-29 17:09:29 2021-07-29 17:48:25 0:38:56 0:29:19 0:09:37 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_kubic_stable 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6301459 2021-07-29 16:37:06 2021-07-29 17:09:29 2021-07-29 17:54:27 0:44:58 0:32:55 0:12:03 smithi master centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_8.stream} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
dead 6301460 2021-07-29 16:37:07 2021-07-29 17:11:20 2021-07-30 04:58:27 11:47:07 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/small-objects} 2
fail 6301461 2021-07-29 16:37:08 2021-07-29 17:12:10 2021-07-29 17:41:30 0:29:20 0:22:17 0:07:03 smithi master rhel 8.3 rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.3_kubic_stable} 2-node-mgr orchestrator_cli} 2
Failure Reason:

Test failure: test_host_rm (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

dead 6301462 2021-07-29 16:37:09 2021-07-29 17:12:20 2021-07-30 04:59:59 11:47:39 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 6301463 2021-07-29 16:37:10 2021-07-29 17:15:31 2021-07-29 17:55:07 0:39:36 0:29:57 0:09:39 smithi master ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1