Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 6538302 2021-12-01 14:09:37 2021-12-01 15:38:10 2021-12-01 16:20:38 0:42:28 0:31:44 0:10:44 smithi master ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6538303 2021-12-01 14:09:38 2021-12-01 15:38:10 2021-12-01 15:58:38 0:20:28 0:08:27 0:12:01 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-288d74ee-52bf-11ec-8c2d-001a4aab830c@mon.a'

pass 6538304 2021-12-01 14:09:39 2021-12-01 15:38:10 2021-12-01 16:17:40 0:39:30 0:29:57 0:09:33 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6538305 2021-12-01 14:09:40 2021-12-01 15:38:11 2021-12-01 16:04:09 0:25:58 0:14:52 0:11:06 smithi master rados/cephadm/workunits/{agent/on mon_election/classic task/test_orch_cli} 1
Failure Reason:

Test failure: test_daemon_restart (tasks.cephadm_cases.test_cli.TestCephadmCLI)

pass 6538306 2021-12-01 14:09:41 2021-12-01 15:38:11 2021-12-01 16:02:10 0:23:59 0:13:55 0:10:04 smithi master ubuntu 20.04 rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
fail 6538307 2021-12-01 14:09:42 2021-12-01 15:38:11 2021-12-01 16:02:26 0:24:15 0:13:29 0:10:46 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 57151d80-52bf-11ec-8c2d-001a4aab830c -- ceph orch daemon add osd smithi150:vg_nvme/lv_3'

fail 6538308 2021-12-01 14:09:43 2021-12-01 15:39:02 2021-12-01 15:59:19 0:20:17 0:08:30 0:11:47 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi026 with status 5: 'sudo systemctl stop ceph-43c736c8-52bf-11ec-8c2d-001a4aab830c@mon.a'

pass 6538309 2021-12-01 14:09:45 2021-12-01 15:39:52 2021-12-01 16:01:14 0:21:22 0:10:11 0:11:11 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6538310 2021-12-01 14:09:46 2021-12-01 15:41:13 2021-12-01 16:20:55 0:39:42 0:29:24 0:10:18 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6538311 2021-12-01 14:09:47 2021-12-01 15:41:24 2021-12-01 16:19:24 0:38:00 0:27:03 0:10:57 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6538312 2021-12-01 14:09:48 2021-12-01 15:42:34 2021-12-01 16:24:15 0:41:41 0:32:01 0:09:40 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/module_selftest} 2
fail 6538313 2021-12-01 14:09:49 2021-12-01 15:42:34 2021-12-01 15:59:30 0:16:56 0:06:09 0:10:47 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/host rook/master} 3
Failure Reason:

[Errno 2] Cannot find file on the remote 'ubuntu@smithi133.front.sepia.ceph.com': 'rook/cluster/examples/kubernetes/ceph/operator.yaml'

fail 6538314 2021-12-01 14:09:50 2021-12-01 15:43:05 2021-12-01 16:14:33 0:31:28 0:22:10 0:09:18 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi013 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6538315 2021-12-01 14:09:51 2021-12-01 15:43:05 2021-12-01 16:07:09 0:24:04 0:13:26 0:10:38 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 6538316 2021-12-01 14:09:52 2021-12-01 15:44:06 2021-12-01 16:04:46 0:20:40 0:08:35 0:12:05 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/pacific backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi066 with status 5: 'sudo systemctl stop ceph-06640ff8-52c0-11ec-8c2d-001a4aab830c@mon.a'

pass 6538317 2021-12-01 14:09:53 2021-12-01 15:45:28 2021-12-01 16:07:46 0:22:18 0:10:07 0:12:11 smithi master centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream} tasks/rados_striper} 2
pass 6538318 2021-12-01 14:09:54 2021-12-01 15:45:48 2021-12-01 16:41:19 0:55:31 0:45:44 0:09:47 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} 2
pass 6538319 2021-12-01 14:09:55 2021-12-01 15:45:49 2021-12-01 16:24:54 0:39:05 0:28:42 0:10:23 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
fail 6538320 2021-12-01 14:09:56 2021-12-01 15:45:49 2021-12-01 16:06:08 0:20:19 0:09:31 0:10:48 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

dead 6538321 2021-12-01 14:09:57 2021-12-01 15:46:19 2021-12-02 03:55:53 12:09:34 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

pass 6538322 2021-12-01 14:09:58 2021-12-01 15:46:20 2021-12-01 16:14:01 0:27:41 0:20:27 0:07:14 smithi master rhel 8.4 rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 6538323 2021-12-01 14:09:59 2021-12-01 15:46:50 2021-12-01 16:31:04 0:44:14 0:34:34 0:09:40 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
pass 6538324 2021-12-01 14:10:00 2021-12-01 15:47:41 2021-12-01 16:10:57 0:23:16 0:13:18 0:09:58 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
fail 6538325 2021-12-01 14:10:01 2021-12-01 15:47:41 2021-12-01 16:07:05 0:19:24 0:08:27 0:10:57 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi053 with status 5: 'sudo systemctl stop ceph-570a8388-52c0-11ec-8c2d-001a4aab830c@mon.a'

fail 6538326 2021-12-01 14:10:02 2021-12-01 15:47:41 2021-12-01 16:25:25 0:37:44 0:28:27 0:09:17 smithi master centos 8.3 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6538327 2021-12-01 14:10:03 2021-12-01 15:47:42 2021-12-01 17:59:32 2:11:50 2:02:46 0:09:04 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
pass 6538328 2021-12-01 14:10:04 2021-12-01 15:48:12 2021-12-01 16:19:51 0:31:39 0:23:27 0:08:12 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6538329 2021-12-01 14:10:06 2021-12-01 15:50:33 2021-12-01 16:59:05 1:08:32 1:00:09 0:08:23 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_3.0 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 6538330 2021-12-01 14:10:07 2021-12-01 15:51:53 2021-12-01 16:19:52 0:27:59 0:16:49 0:11:10 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
pass 6538331 2021-12-01 14:10:08 2021-12-01 15:51:54 2021-12-01 17:31:34 1:39:40 1:33:02 0:06:38 smithi master rhel 8.4 rados/upgrade/parallel/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 6538332 2021-12-01 14:10:09 2021-12-01 15:52:24 2021-12-01 16:31:26 0:39:02 0:29:44 0:09:18 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6538333 2021-12-01 14:10:10 2021-12-01 15:52:24 2021-12-01 16:41:28 0:49:04 0:42:44 0:06:20 smithi master rhel 8.4 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 6538334 2021-12-01 14:10:11 2021-12-01 15:52:25 2021-12-01 16:19:53 0:27:28 0:14:31 0:12:57 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6538335 2021-12-01 14:10:12 2021-12-01 15:54:26 2021-12-01 16:17:36 0:23:10 0:14:11 0:08:59 smithi master centos 8.3 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6538336 2021-12-01 14:10:13 2021-12-01 15:54:26 2021-12-01 16:29:24 0:34:58 0:28:11 0:06:47 smithi master rhel 8.4 rados/cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} 2
dead 6538337 2021-12-01 14:10:14 2021-12-01 15:54:26 2021-12-01 16:09:26 0:15:00 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6538338 2021-12-01 14:10:15 2021-12-01 15:54:27 2021-12-01 16:41:37 0:47:10 0:40:02 0:07:08 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
pass 6538339 2021-12-01 14:10:16 2021-12-01 15:54:37 2021-12-01 16:23:56 0:29:19 0:22:12 0:07:07 smithi master rhel 8.4 rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 2
pass 6538340 2021-12-01 14:10:17 2021-12-01 15:54:37 2021-12-01 16:20:55 0:26:18 0:16:33 0:09:45 smithi master centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8.stream} tasks/failover} 2
pass 6538341 2021-12-01 14:10:18 2021-12-01 15:55:28 2021-12-01 16:22:41 0:27:13 0:17:22 0:09:51 smithi master centos 8.3 rados/cephadm/smoke-roleless/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 6538342 2021-12-01 14:10:19 2021-12-01 15:55:28 2021-12-01 16:47:09 0:51:41 0:44:16 0:07:25 smithi master rhel 8.4 rados/cephadm/thrash/{0-distro/rhel_8.4_container_tools_rhel8 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 6538343 2021-12-01 14:10:20 2021-12-01 15:55:39 2021-12-01 16:19:03 0:23:24 0:14:27 0:08:57 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
pass 6538344 2021-12-01 14:10:21 2021-12-01 15:55:39 2021-12-01 16:24:56 0:29:17 0:21:00 0:08:17 smithi master rhel 8.4 rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6538345 2021-12-01 14:10:22 2021-12-01 15:57:30 2021-12-01 16:22:44 0:25:14 0:14:18 0:10:56 smithi master centos 8.3 rados/cephadm/osds/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 6538346 2021-12-01 14:10:24 2021-12-01 15:59:22 2021-12-01 16:21:02 0:21:40 0:10:30 0:11:10 smithi master centos 8.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} tasks/mon_clock_with_skews} 3
pass 6538347 2021-12-01 14:10:25 2021-12-01 15:59:33 2021-12-01 16:49:40 0:50:07 0:43:30 0:06:37 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6538348 2021-12-01 14:10:26 2021-12-01 16:00:13 2021-12-01 16:36:20 0:36:07 0:25:08 0:10:59 smithi master centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8.stream} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6538349 2021-12-01 14:10:27 2021-12-01 16:01:13 2021-12-01 16:26:51 0:25:38 0:16:19 0:09:19 smithi master centos 8.3 rados/cephadm/smoke/{0-distro/centos_8.3_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 6538350 2021-12-01 14:10:28 2021-12-01 16:01:24 2021-12-01 16:32:56 0:31:32 0:22:19 0:09:13 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 6538351 2021-12-01 14:10:29 2021-12-01 16:01:24 2021-12-01 16:42:48 0:41:24 0:31:45 0:09:39 smithi master centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8.stream} thrashers/many workloads/rados_mon_workunits} 2
pass 6538352 2021-12-01 14:10:30 2021-12-01 16:01:35 2021-12-01 16:40:27 0:38:52 0:27:04 0:11:48 smithi master ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/radosbench-high-concurrency} 2
pass 6538353 2021-12-01 14:10:31 2021-12-01 16:02:15 2021-12-01 16:29:12 0:26:57 0:21:14 0:05:43 smithi master rhel 8.4 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
pass 6538354 2021-12-01 14:10:32 2021-12-01 16:02:15 2021-12-01 16:46:38 0:44:23 0:32:29 0:11:54 smithi master ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 6538355 2021-12-01 14:10:33 2021-12-01 16:02:26 2021-12-01 16:45:53 0:43:27 0:37:44 0:05:43 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 6538356 2021-12-01 14:10:34 2021-12-01 16:02:26 2021-12-01 16:25:52 0:23:26 0:16:40 0:06:46 smithi master rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} tasks/readwrite} 2
dead 6538357 2021-12-01 14:10:35 2021-12-01 16:02:26 2021-12-02 04:11:17 12:08:51 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

pass 6538358 2021-12-01 14:10:36 2021-12-01 16:02:37 2021-12-01 16:21:33 0:18:56 0:09:11 0:09:45 smithi master centos 8.3 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6538359 2021-12-01 14:10:37 2021-12-01 16:02:37 2021-12-01 16:27:08 0:24:31 0:13:27 0:11:04 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 6538360 2021-12-01 14:10:38 2021-12-01 16:03:58 2021-12-01 16:29:50 0:25:52 0:15:55 0:09:57 smithi master rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_cephadm} 1
pass 6538361 2021-12-01 14:10:39 2021-12-01 16:04:08 2021-12-01 16:43:46 0:39:38 0:30:27 0:09:11 smithi master centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8.stream} thrashers/morepggrow thrashosds-health workloads/radosbench} 2
pass 6538362 2021-12-01 14:10:40 2021-12-01 16:04:18 2021-12-01 16:29:30 0:25:12 0:14:18 0:10:54 smithi master centos 8.stream rados/singleton/{all/radostool mon_election/classic msgr-failures/none msgr/async objectstore/filestore-xfs rados supported-random-distro$/{centos_8.stream}} 1
pass 6538363 2021-12-01 14:10:41 2021-12-01 16:04:19 2021-12-01 16:27:52 0:23:33 0:13:08 0:10:25 smithi master centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 6538364 2021-12-01 14:10:42 2021-12-01 16:04:49 2021-12-01 16:28:58 0:24:09 0:13:47 0:10:22 smithi master centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 6538365 2021-12-01 14:10:43 2021-12-01 16:06:10 2021-12-01 16:43:37 0:37:27 0:26:41 0:10:46 smithi master centos 8.stream rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_8.stream}} 1
pass 6538366 2021-12-01 14:10:44 2021-12-01 16:06:10 2021-12-01 16:24:51 0:18:41 0:07:16 0:11:25 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 6538367 2021-12-01 14:10:45 2021-12-01 16:07:10 2021-12-01 16:32:22 0:25:12 0:15:12 0:10:00 smithi master centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 6538368 2021-12-01 14:10:46 2021-12-01 16:07:11 2021-12-01 16:32:09 0:24:58 0:15:26 0:09:32 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_8} tasks/insights} 2
pass 6538369 2021-12-01 14:10:47 2021-12-01 16:07:11 2021-12-01 17:05:43 0:58:32 0:46:11 0:12:21 smithi master ubuntu 20.04 rados/cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
pass 6538370 2021-12-01 14:10:48 2021-12-01 16:07:31 2021-12-01 16:37:37 0:30:06 0:23:27 0:06:39 smithi master rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/redirect} 2
pass 6538371 2021-12-01 14:10:49 2021-12-01 16:07:52 2021-12-01 16:44:11 0:36:19 0:24:53 0:11:26 smithi master ubuntu 20.04 rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
dead 6538372 2021-12-01 14:10:50 2021-12-01 16:08:52 2021-12-01 16:24:46 0:15:54 smithi master rhel 8.4 rados/cephadm/with-work/{0-distro/rhel_8.4_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 6538373 2021-12-01 14:10:51 2021-12-01 16:09:43 2021-12-01 16:33:46 0:24:03 0:09:13 0:14:50 smithi master ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6538374 2021-12-01 14:10:52 2021-12-01 16:14:04 2021-12-01 16:58:38 0:44:34 0:32:43 0:11:51 smithi master ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-start-upgrade 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
pass 6538375 2021-12-01 14:10:53 2021-12-01 16:14:34 2021-12-01 16:38:03 0:23:29 0:13:00 0:10:29 smithi master ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 6538376 2021-12-01 14:10:54 2021-12-01 16:14:34 2021-12-01 16:55:18 0:40:44 0:29:34 0:11:10 smithi master centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6538377 2021-12-01 14:10:56 2021-12-01 16:16:05 2021-12-01 16:34:35 0:18:30 0:06:15 0:12:15 smithi master ubuntu 20.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 6538378 2021-12-01 14:10:57 2021-12-01 16:17:05 2021-12-01 20:50:19 4:33:14 4:23:24 0:09:50 smithi master centos 8.stream rados/standalone/{supported-random-distro$/{centos_8.stream} workloads/osd-backfill} 1
pass 6538379 2021-12-01 14:10:57 2021-12-01 16:17:06 2021-12-01 16:54:41 0:37:35 0:27:14 0:10:21 smithi master centos 8.3 rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
pass 6538380 2021-12-01 14:10:59 2021-12-01 16:17:36 2021-12-01 16:42:58 0:25:22 0:15:14 0:10:08 smithi master centos 8.3 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/redirect_promote_tests} 2
pass 6538381 2021-12-01 14:11:00 2021-12-01 16:17:47 2021-12-01 16:46:33 0:28:46 0:21:26 0:07:20 smithi master rhel 8.4 rados/cephadm/smoke-roleless/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 6538382 2021-12-01 14:11:01 2021-12-01 16:19:07 2021-12-01 16:44:08 0:25:01 0:15:16 0:09:45 smithi master centos 8.3 rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 6538383 2021-12-01 14:11:02 2021-12-01 16:19:27 2021-12-01 16:46:24 0:26:57 0:14:28 0:12:29 smithi master ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} 2
pass 6538384 2021-12-01 14:11:03 2021-12-01 16:19:58 2021-12-01 17:23:13 1:03:15 0:52:57 0:10:18 smithi master centos 8.3 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async objectstore/filestore-xfs rados tasks/rados_api_tests validater/valgrind} 2
pass 6538385 2021-12-01 14:11:04 2021-12-01 16:19:58 2021-12-01 16:35:55 0:15:57 0:04:51 0:11:06 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_cephadm_repos} 1
fail 6538386 2021-12-01 14:11:05 2021-12-01 16:19:58 2021-12-01 16:53:42 0:33:44 0:24:27 0:09:17 smithi master ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6538387 2021-12-01 14:11:06 2021-12-01 16:19:59 2021-12-01 17:15:09 0:55:10 0:45:35 0:09:35 smithi master centos 8.stream rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} 1
fail 6538388 2021-12-01 14:11:07 2021-12-01 16:19:59 2021-12-01 16:39:27 0:19:28 0:08:11 0:11:17 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi129 with status 5: 'sudo systemctl stop ceph-dbc0c138-52c4-11ec-8c2d-001a4aab830c@mon.a'

fail 6538389 2021-12-01 14:11:08 2021-12-01 16:20:00 2021-12-01 16:50:20 0:30:20 0:19:31 0:10:49 smithi master centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6538390 2021-12-01 14:11:09 2021-12-01 16:20:40 2021-12-01 19:42:41 3:22:01 3:11:59 0:10:02 smithi master ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} 1
pass 6538391 2021-12-01 14:11:10 2021-12-01 16:20:40 2021-12-01 17:20:41 1:00:01 0:52:25 0:07:36 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6538392 2021-12-01 14:11:11 2021-12-01 16:21:01 2021-12-01 16:45:12 0:24:11 0:13:51 0:10:20 smithi master rados/cephadm/workunits/{agent/off mon_election/classic task/test_orch_cli} 1
pass 6538393 2021-12-01 14:11:12 2021-12-01 16:21:01 2021-12-01 16:47:07 0:26:06 0:14:17 0:11:49 smithi master centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8.stream} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6538394 2021-12-01 14:11:13 2021-12-01 16:21:42 2021-12-01 16:41:54 0:20:12 0:08:46 0:11:26 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi089 with status 5: 'sudo systemctl stop ceph-33d12f02-52c5-11ec-8c2d-001a4aab830c@mon.a'

fail 6538395 2021-12-01 14:11:14 2021-12-01 16:22:42 2021-12-01 16:55:05 0:32:23 0:23:51 0:08:32 smithi master rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic random-objectstore$/{bluestore-bitmap} supported-random-distro$/{rhel_8} tasks/prometheus} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

dead 6538396 2021-12-01 14:11:15 2021-12-01 16:22:53 2021-12-02 04:33:38 12:10:45 smithi master centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

hit max job timeout

fail 6538397 2021-12-01 14:11:16 2021-12-01 16:24:03 2021-12-01 18:29:03 2:05:00 1:56:39 0:08:21 smithi master centos 8.3 rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi019 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

fail 6538398 2021-12-01 14:11:17 2021-12-01 16:24:04 2021-12-01 17:00:33 0:36:29 0:24:20 0:12:09 smithi master ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c25b3a7acbd7575209c931c35ea4d997dc77009d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6538399 2021-12-01 14:11:18 2021-12-01 16:24:24 2021-12-01 16:43:17 0:18:53 0:10:40 0:08:13 smithi master centos 8.stream rados/singleton/{all/admin-socket mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8.stream}} 1